Columns: id (string, length 36), source (string, 15 classes), formatted_source (string, 13 classes), text (string, length 2 to 7.55M)
0646cc3a-7817-459e-a89e-f570735a05c8
trentmkelly/LessWrong-43k
LessWrong
Chapter 41: Frontal Override The biting January wind howled around the vast, blank stone walls that demarcated the material bounds of the castle Hogwarts, whispering and whistling in odd pitches as it blew past closed windows and stone turrets. The most recent snow had mostly blown away, but occasional patches of melted and refrozen ice still stuck to the stone face and blazed reflected sunlight. From a distance, it must have looked like Hogwarts was blinking hundreds of eyes. A sudden gust made Draco flinch, and try, impossibly, to press his body even closer to the stone, which felt like ice and smelled like ice. Some utterly pointless instinct seemed convinced that he was about to be blown off the outer wall of Hogwarts, and that the best way to prevent this was to jerk around in helpless reflex and possibly throw up. Draco was trying very hard not to think about the six stories worth of empty air underneath him, and focus, instead, on how he was going to kill Harry Potter. "You know, Mr. Malfoy," said the young girl beside him in a conversational voice, "if a seer had told me that someday I'd be hanging onto the side of a castle by my fingertips, trying not to look down or think about how loud Mum'd scream if she saw me, I wouldn't've had any idea of how it'd happen, except that it'd be Harry Potter's fault." ---------------------------------------- Earlier: The two allied Generals stepped together over Longbottom's body, their boots hitting the floor in almost perfect synchrony. Only a single soldier now stood between them and Harry, a Slytherin boy named Samuel Clamons, whose hand was clenched white around his wand, held upward to sustain his Prismatic Wall. The boy's breathing was coming rapidly, but his face showed the same cold determination that lit the eyes of his general, Harry Potter, who was standing behind the Prismatic Wall at the dead end of the corridor next to an open window, with his hands held mysteriously behind his back. The battle had been ridiculously difficult,
bcfad80e-76e9-478b-ac84-2d50f9a84b90
trentmkelly/LessWrong-43k
LessWrong
Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern) I finally read a game theory textbook. Von Neumann and Morgenstern's "Theory of Games and Economic Behavior" basically started the field of game theory. I'll summarize the main ideas and my opinions. This was also the book that introduced the VNM theorem about decision-theoretic utility. They presented it as an improvement on "indifference curves", which apparently was how economists thought about preferences back then. 2-player zero-sum games We start with 2-player zero-sum games, where the payoffs to the players sum to 0 in every outcome. (We could just as well consider constant-sum games, where the sum of payoffs is the same in every outcome.) Maximizing your reward is the same as minimizing your opponent's reward. An example is Rock, Paper, Scissors. Suppose player 1 moves first with move x, then player 2 moves with move y (knowing x), and then the players get their rewards R(x,y), −R(x,y). Then player 2 should play argmin_y R(x,y) and player 1 should play argmax_x min_y R(x,y). If instead player 2 moves first, then player 1 should play argmax_x R(x,y) and player 2 should play argmin_y max_x R(x,y). The payoff to player 1 will be the "maximin" or "minimax" value, depending on whether they move first or second. And moving second is better: max_x min_y R(x,y) ≤ min_y max_x R(x,y). If the action space is finite and we allow mixed strategies, then by the Kakutani fixed point theorem equality obtains. (The book proves this with elementary linear algebra instead.) In Rock, Paper, Scissors, the maximin value is 0, which you get by playing randomly. VNM then argue that if both players must move simultaneously, they should play the maximin/minimax strategy. Their argument is kind of sketchy: They say that since player 1 can secure at least the maximin reward for themselves, and since player 2 can prevent player 1 from getting more than the maximin reward, it's rational for player 1 to play the strategy that guarantees them the maximin reward (17.8.2). They claim they aren't assuming knowledge of rationality of all players
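To make the maximin computation concrete, here is a minimal sketch (mine, not from the book or the review) that solves for player 1's maximin mixed strategy in Rock, Paper, Scissors as a standard linear program, assuming numpy and scipy are available:

```python
# Compute player 1's maximin mixed strategy for a zero-sum game via linear programming.
import numpy as np
from scipy.optimize import linprog

# Payoff matrix R[i, j] = reward to player 1 when player 1 plays i and player 2 plays j.
# Rows/columns: 0 = Rock, 1 = Paper, 2 = Scissors.
R = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

n = R.shape[0]
# Variables: p_0..p_{n-1} (player 1's mixed strategy) and v (the guaranteed value).
# Maximize v subject to: for every opponent column j, sum_i p_i * R[i, j] >= v,
# with the p_i forming a probability distribution.
c = np.zeros(n + 1)
c[-1] = -1.0                                 # linprog minimizes, so minimize -v
A_ub = np.hstack([-R.T, np.ones((n, 1))])    # row j encodes: v - sum_i p_i * R[i, j] <= 0
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * n + [0.0]])         # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * n + [(None, None)]       # p_i in [0, 1], v unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:n], res.x[-1]
print("maximin strategy:", np.round(p, 3))   # ~ [0.333, 0.333, 0.333]
print("game value:", round(v, 6))            # ~ 0.0
```

The linear program encodes max_x min_y R(x,y) over mixed strategies x; for Rock, Paper, Scissors it recovers the uniform strategy with value 0, matching the claim above that random play achieves the maximin.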
a3eddf7a-74b6-49f4-a96e-4d402449a2ec
trentmkelly/LessWrong-43k
LessWrong
HPMOR and Sartre's "The Flies" Am I the only one who sees obvious parallels between Sartre's use of Greek mythology as a shared reference point to describe his philosophy more effectively to a lay audience and Yudkowsky's use of Harry Potter to accomplish the same goal? Or is it so obvious no one bothers to talk about it? Was that conscious on Yudkowsky's part? Unconscious? Or am I just seeing connections that aren't there?
12ab9512-58fa-424a-a3fd-bd8195841944
trentmkelly/LessWrong-43k
LessWrong
eliminating bias through language? Due to linguistic relativity, might it be possible to modify or create a system of communication in order to make its users more aware of its biases? If so, do any projects to actually do this exist?
3e4b7f2c-486c-44ac-9ced-8edc1e13179e
trentmkelly/LessWrong-43k
LessWrong
Where else are you posting? As a result of XiXiDu's massive Resources and References post, I just found out about Katja Grace's very pleasant Meteuphoric blog. I'm curious about what else LessWrongians are posting at other sites, and if there's interest, I'll make this into a top-level post. I also post at Input Junkie.
672560e9-85f7-4e00-a15a-78944f4abea8
trentmkelly/LessWrong-43k
LessWrong
The Day After Tomorrow Are you rooting for Donald Trump tomorrow? Please read the text in red. On Hillary’s side? Please read the parts in blue. Imagine: it’s Tuesday night and the result is beyond doubt – a landslide for Crooked Hillary. What do you plan to do? Retreat in disgust from mainstream American discourse and retrench in your bubble? Claim that the “system is rigged” against you? You should be embarrassed, that’s what a leftist would do. Imagine: it’s Wednesday morning and your next president fuhrer is Donald Trump. What do you plan to do? Scream that the American voters are idiots? Threaten violence and civil disobedience? How very Trump of you. There are 324,118,787 people in America. The president is just one of them, and so are you. We all shape the kind of country we live in. Do you hate Hillary’s corruption? Choose integrity. Pick your leaders based on their actions, not their promises. Did the person talking about working class problems spend a single day working a blue collar job? Is the person warning you of foreigners quick to use offshore labor when they can save a buck? This country will be saved by men of principle leading men of principle, not by making compromise with sin. Do you hate Trump because he’s disrespectful? Choose respect. Learn to respect people from faraway lands, with different tastes and strange beliefs. And by these I mean: Oklahoma, Nascar, Jesus. In case you forgot, respect doesn’t mean letting them live somewhere out of sight. It means respecting their voice and their choice. Do you hate the lies of the mainstream media? Choose objectivity. Does Breitbart make money when they report the unvarnished truth or when they post outrage clickbait? Don’t consume just the media that feels good, broaden your view and you’ll see a truer picture. What use is free speech if one doesn’t freely listen? Do you hate Trump’s bold-faced lying? Choose truth. Do you share articles that attempt to get to the bottom of issues, or memes that make your oppon
2e9c4fdd-4608-4946-9f86-e926f77de7f2
trentmkelly/LessWrong-43k
LessWrong
Depression philosophizing Epistemic status: Weak confidence. Refine and dismantle as you see fit. For certain people, philosophical thinking is net harmful to their everyday life. This should not be that surprising. Certain kinds of cognitive behavior do reliably lead to unhappiness, and there's no a priori reason to suppose explicit, logical thinking is somehow exempt from that risk. For many people, it appears that a stable sense of identity, purpose in life, and place in society are important factors in creating and maintaining happiness and contentment. Philosophical thinking often involves destabilizing those concepts. I want to point to something I have noticed in myself, and suspect happens in others as well. I call it "depression philosophizing." You begin to think philosophically about your life, and slowly, maybe imperceptibly, you feel worse and worse about yourself, as you meditate on such concepts as morality, meaning, and ontology in an abstract sense. It's tempting to unilaterally demonize depression philosophizing. But there is one big thing about it that stands out and that makes it so hard to quit doing - and that's that the quality of your intellectual rigor doesn't correlate very much with your emotional state. Plenty of great philosophers were miserable people (Wittgenstein, Nietzsche, Schopenhauer). I think rationalists are likely to fall prey to this trap. As a group, we have a revealed preference towards abstract thinking and philosophy. Some of our folk heroes appear unusually good at facing philosophical problems without letting it get to them or divert them from their goals - Nate Soares pops to mind for me. I don't have a good answer for how to combat this beyond the usual mechanisms used to treat depression. But I've had some success at simply reminding myself that even the act of stopping and thinking has an opportunity cost to it - it's not actually a very wise move to devote large amounts of time running your brain in circles around a tempting philosophi
b422da3e-9a00-4167-a76c-8c00a7fe744f
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Content generation. Where do we draw the line? If you want to be affected by other people, if you want to live in a culture with other people, then I believe the 4 statements below are true: * You can't generate **all** of your "content". This would basically mean living in the Matrix *without* other people. Consumerist solipsism. Everything you see is generated, but not by other people. * You can't **100%** control what content you (can) consume. For example, if you read *someone's* story, you let the author control what you experience. If you don't let anybody control what you experience, you aren't affected by other people. * "Symbols" (people's output) in the culture shouldn't have an absolutely arbitrary value. Again, this is about control. If you want to be affected by other people, by their culture, you can't have 100% control over the value of their output. *"Right now I feel like consuming human made content. But any minute I may decide that AI generated content is more valuable and switch to that"* - attitudes like this may make the value of people's output completely arbitrary, decided by you on a whim. * Other people should have some control over their image and output. If you want to be affected by other people, you can't have 100% control over their image and output. If you create countless variations of someone's personality and exploit/milk them to death, it isn't healthy culturewise. You're violating and destroying boundaries that allow the other person's personality to exist. *(inside of your mind or inside the culture)* ... But where do we draw the line? I'm not sure you can mix the culture of content generation/"AI replacement" and the human culture. I feel that with every step weakening the principles above, the damage to the human culture will grow exponentially. The lost message ================ Imagine a person you don't know. You don't care about them. Even worse, you're put off by what they're saying, you don't want to listen. Or maybe you just aren't interested in the "genre" of their message. But that person may still have a valuable message for you. And that person still has a chance to reach you: 1. They share their message with other people. 2. The message becomes popular in the culture. 3. You notice the popularity. You check out the message again. Or someone explains it to you. But if any person can switch to AI generated content at any minute, transmitting the message may become infinitely harder or outright impossible. **"But AI can generate a better message for me! Even the one I wouldn't initially like!"** Then we're back at square one: you don't want to be affected by other people. Rights to exist =============== Consciousness and personality don't exist in a vacuum, they need a medium to be expressed. For example, text messages or drawings. When you say *"hey, I can generate your output in any medium!"*, you say *"I can deny you existence, I can lock you out of the world"*. I'm not sure it's a good idea/fun future. ... So, I don't really see where this "content generation" is going in the long run. Or in the very short run (***GPT-4*** plus ***DALL-E*** or *"DALL-E 2"* for everyone) *"Do you care that your favorite piece of art/piece of writing was made by a human?"* is the most irrelevant question that you have to consider. Prior questions are: do you care about people in general? do you care about people's ability to communicate? do you care about people's basic rights to be seen and express themselves? 
If "yes", where do you draw the line and how do you make sure that it's a solid line? I don't care if you think *"DALL-E good"* or *"DALL-E bad"*, I care where you draw the line. What condition needs to break for you to say *"wait, it's not what I wanted, something bad is happening"*? If my arguments miss something, it doesn't matter: just tell me where you draw [the line](https://www.youtube.com/watch?v=FZnpCnqXvEM). What would you not like to generate, violate, control? What would be the deal breaker for you?
75cb12d2-93a8-4905-afa1-518408a2586c
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Politics is the mind-killer. Debate is war, arguments are soldiers. There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack. This you cannot do. It is mathematically impossible. For every expectation of evidence, there is an equal and opposite expectation of counterevidence.But it’s okay if your cherished belief isn’t perfectly defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can take a hit or two, and still survive, so long as the hits don't keep on coming in.2Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs.It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world.I just know someone in the audience out there is going to say, “But you can’t concede even a single point if you want to win debates in the real world! If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can’t let the Enemy do that! You’ll lose! What could be more viscerally terrifying than that?”Whatever. Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you’d best integrate all the evidence.Rationality is not a walk, but a dance. On each step in that dance your foot should come down in exactly the correct spot, neither to the left nor to the right. Shifting belief upward with each iota of confirming evidence. Shifting belief downward with each iota of contrary evidence. Yes, down. Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.If an iota or two of evidence happens to countersupport your belief, that’s okay. It happens, sometimes, with probabilistic evidence for non-exact theories. (If an exact theory fails, you are in trouble!) Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. 
If the theory is true, supporting evidence will come in shortly, and the probability will climb again. If the theory is false, you don’t really want it anyway. The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not. When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence. Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: “This is gently contrary evidence, I will shift my belief downward.” Yes, down. It does not destroy your cherished theory. That is qualitative reasoning; think quantitatively. For every expectation of evidence, there is an equal and opposite expectation of counterevidence. On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward. If you think you already know what evidence will come in, then you must already be fairly sure of your theory—probability close to 1—which doesn’t leave much room for the probability to go further upward. And however unlikely it seems that you will encounter disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side. The weighted mean of your expected posterior probability must equal your prior probability. How silly is it, then, to be terrified of revising your probability downward, if you’re bothering to investigate a matter at all? On average, you must anticipate as much downward shift as upward shift from every individual observation. It may perhaps happen that an iota of antisupport comes in again, and again and again, while new support is slow to trickle in. You may find your belief drifting downward and further downward. Until, finally, you realize from which quarter the winds of evidence are blowing against you. In that moment of realization, there is no point in constructing excuses. In that moment of realization, you have already relinquished your cherished belief. Yay! Time to celebrate! Pop a champagne bottle or send out for pizza! You can’t become stronger by keeping the beliefs you started with, after all." The above is a chunk of text from a post created by a user on the website LessWrong. Can you break this down into the top 5 most important points for me. 1. The author asserts that it is a misconception to believe that a true theory can have no failures and a false theory no successes. This idea is flawed as it doesn't take into account the probabilistic nature of evidence. Even a true theory can have instances that seem to contradict it, and a false theory can occasionally seem to be confirmed by certain pieces of evidence. 2. The author emphasizes the importance of integrating all evidence when forming or assessing a theory. It's crucial to consider both supporting and opposing evidence, and adjust the belief in the theory accordingly. This is a process of constant adjustment and refinement. 3. The author criticizes binary, black-and-white reasoning. This type of reasoning does not allow for the nuanced understanding that comes from considering the probabilistic nature of evidence. The author encourages thinking quantitatively rather than qualitatively. 4. 
The author points out that when investigating a theory, one should anticipate as much downward revision (contradictory evidence) as upward revision (confirmatory evidence). This is a natural part of the process of refining a theory. 5. Finally, the author celebrates the moment of realization when one understands that a cherished belief is not supported by the evidence. This is not a moment of failure, but a moment of growth and learning. The author encourages embracing this moment and moving forward with a better understanding. Can you elaborate on the concept of thinking quantitatively rather than qualitatively in the context of forming and assessing theories? Certainly. Thinking quantitatively in the context of forming and assessing theories involves using numerical data and statistical methods to evaluate evidence and make decisions. It's about assessing the weight of evidence in a measurable way, which allows for more nuanced and precise conclusions. For example, instead of simply stating that a piece of evidence supports a theory (qualitative), a quantitative approach would involve determining how strongly that evidence supports the theory. This could involve calculating probabilities, using statistical models, or other numerical methods. In the context of the text, the author is suggesting that theories should not be seen as simply true or false (a binary, qualitative view), but rather we should consider the degree to which the available evidence supports or contradicts the theory (a probabilistic, quantitative view). This means that a single piece of evidence that seems to contradict a theory does not necessarily make the theory false. Instead, it should lead to a downward adjustment in our confidence in the theory. Similarly, a piece of evidence that supports the theory should lead to an upward adjustment in our confidence. This approach allows for a more nuanced understanding of theories, where our confidence in them can shift and change as new evidence becomes available. It also allows for the possibility that a theory can be mostly, but not entirely, correct. Reflect on a real-world scenario where the quantitative approach to assessing theories has been successfully applied. What were the key steps in this process and how did the quantitative approach contribute to a more nuanced understanding of the theory in question? One real-world scenario where a quantitative approach has been successfully applied is in the field of climate science. Climate scientists use a variety of data, including temperature records, ice core samples, and computer models, to test theories about climate change. Here are the key steps in this process: 1. **Formulating a Hypothesis**: Scientists propose a theory or hypothesis, such as "human activity is causing global warming." 2. **Collecting Data**: Scientists collect a wide range of data related to climate, such as temperature records, greenhouse gas concentrations, and ice core samples. 3. **Analyzing Data**: Scientists use statistical methods to analyze the data. For example, they might use regression analysis to determine the relationship between greenhouse gas concentrations and global temperatures. 4. **Interpreting Results**: Scientists interpret the results of their analysis. If the data shows a strong correlation between human activity and global warming, this would be evidence in support of the theory. 5. **Refining the Theory**: Based on the results, scientists may refine their theory. 
For example, they might specify the extent to which human activity is contributing to global warming, or identify specific activities that are particularly problematic. The quantitative approach allows for a more nuanced understanding of the theory of climate change. Rather than simply stating that human activity is causing global warming, scientists can quantify the extent of the impact, identify the most significant contributing factors, and make predictions about future changes. This approach also allows for the possibility that the theory is not entirely correct, and for adjustments to be made as new evidence becomes available. This nuanced understanding is crucial for informing policy decisions related to climate change. It allows policymakers to target the most significant contributing factors, and to assess the likely impact of different policy options.
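As a footnote to the quantitative-updating discussion above, here is a small numeric sketch (my own illustration, not part of the original post or the answers) of the claim that the prior-weighted average of your possible posteriors must equal your prior, using the 95%-heads coin from the quoted essay:

```python
# Conservation of expected evidence: the expected posterior equals the prior.
prior = 0.5                 # P(H), where H = "the coin comes up heads 95% of the time"
p_heads_given_h = 0.95      # likelihood of heads under H
p_heads_given_fair = 0.50   # likelihood of heads under the alternative (a fair coin)

def update(prior, heads):
    """Posterior P(H | one flip) and the prior predictive probability of that flip."""
    lik_h = p_heads_given_h if heads else 1 - p_heads_given_h
    lik_fair = p_heads_given_fair if heads else 1 - p_heads_given_fair
    p_flip = prior * lik_h + (1 - prior) * lik_fair
    return prior * lik_h / p_flip, p_flip

post_heads, p_heads = update(prior, True)
post_tails, p_tails = update(prior, False)
print(f"posterior after heads: {post_heads:.3f}")   # modest shift up   (~0.655)
print(f"posterior after tails: {post_tails:.3f}")   # large shift down  (~0.091)
print(f"expected posterior:    {p_heads * post_heads + p_tails * post_tails:.3f}")  # equals the prior, 0.500
```

Note the asymmetry the essay describes: the likely outcome (heads) buys only a modest upward shift, while the unlikely outcome (tails) forces a much larger downward shift, and the two exactly balance in expectation.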
7fd7d90d-f6ac-4c41-9b00-a3bc0e61f121
trentmkelly/LessWrong-43k
LessWrong
Exit Duty Generator by Matti Häyry Hello! I was hoping to start a discussion about a recent article called Exit Duty Generator by Matti Häyry, it can be found here:  https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/exit-duty-generator/49ACA1A21FF0A4A3D0DB81230192A042#article or DOI: https://doi.org/10.1017/S096318012300004X Abstract This article presents a revised version of negative utilitarianism. Previous versions have relied on a hedonistic theory of value and stated that suffering should be minimized. The traditional rebuttal is that the doctrine in this form morally requires us to end all sentient life. To avoid this, a need-based theory of value is introduced. The frustration of the needs not to suffer and not to have one’s autonomy dwarfed should, prima facie, be decreased. When decreasing the need frustration of some would increase the need frustration of others, the case is deferred and a fuller ethical analysis is conducted. The author’s perceptions on murder, extinction, the right to die, antinatalism, veganism, and abortion are used to reach a reflective equilibrium. The new theory is then applied to consumerism, material growth, and power relations. The main finding is that the burden of proof should be on those who promote the status quo. Any thoughts? Thanks!  All the best,  Amanda 
4b20d131-a5d2-441b-a2d7-8a356804dd2a
trentmkelly/LessWrong-43k
LessWrong
The shard theory of human values TL;DR: We propose a theory of human value formation. According to this theory, the reward system shapes human values in a relatively straightforward manner. Human values are not e.g. an incredibly complicated, genetically hard-coded set of drives, but rather sets of contextually activated heuristics which were shaped by and bootstrapped from crude, genetically hard-coded reward circuitry.  ---------------------------------------- We think that human value formation is extremely important for AI alignment. We have empirically observed exactly one process which reliably produces agents which intrinsically care about certain objects in the real world, which reflect upon their values and change them over time, and which—at least some of the time, with non-negligible probability—care about each other. That process occurs millions of times each day, despite genetic variation, cultural differences, and disparity in life experiences. That process produced you and your values.  Human values look so strange and inexplicable. How could those values be the product of anything except hack after evolutionary hack? We think this is not what happened. This post describes the shard theory account of human value formation, split into three sections: 1. Details our working assumptions about the learning dynamics within the brain, 2. Conjectures that reinforcement learning grows situational heuristics of increasing complexity, and 3. Uses shard theory to explain several confusing / “irrational” quirks of human decision-making. Terminological note: We use “value” to mean a contextual influence on decision-making. Examples: * Wanting to hang out with a friend. * Feeling an internal urge to give money to a homeless person. * Feeling an internal urge to text someone you have a crush on. * That tug you feel when you are hungry and pass by a donut.  To us, this definition seems importantly type-correct and appropriate—see Appendix A.2. The main downside is that the definition i
45df45cf-13a7-46c1-bb3a-ba53faa4fe3e
trentmkelly/LessWrong-43k
LessWrong
New LW Meetup: Bogota This summary was posted to LW Main on January 27th. The following week's summary is here. New meetups (or meetups with a hiatus of more than a year) are happening in: * First Bogota, Colombia Meetup: 28 January 2017 06:00PM Irregularly scheduled Less Wrong meetups are taking place in: * Sao Paulo - Meetup de janeiro: 28 January 2017 02:00PM The following meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Baltimore / UMBC Weekly Meetup: 29 January 2017 08:00PM * Denver Area LW February Meetup: 07 February 2017 07:00PM * [Detroit Metro / Ann Arbor], Michigan: 28 January 2017 04:20PM * Moscow: unconference: 29 January 2017 02:00PM * Moscow LW meetup in "Nauchka" library: 03 February 2017 08:00PM * San Francisco Meetup: Group Debugging: 30 January 2017 06:15PM * Sydney Rationality Dojo - February 2017: 05 February 2017 04:00PM * Vienna Meetup: 28 January 2017 03:00PM * Washington, D.C.: Meta Meetup: 29 January 2017 03:30PM Locations with regularly scheduled meetups: Ann Arbor, Austin, Baltimore, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, Netherlands, New Hampshire, New York, Philadelphia, Prague, Research Triangle NC, San Francisco Bay Area, Seattle, St. Petersburg, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information
4a20f521-7ab8-474b-a64f-f55efba4e6a5
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Partial Agency *Epistemic status: very rough intuitions here.* I think there's something interesting going on with [Evan's notion of myopia](https://www.lesswrong.com/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility). Evan has been calling this thing "myopia". Scott has been calling it "stop-gradients". In my own mind, I've been calling the phenomenon "directionality". Each of these words gives a different set of intuitions about how the cluster could eventually be formalized. Stop-Gradients -------------- Nash equilibria are, abstractly, modeling agents via an equation like a∗ = argmax_a f(a, a∗). In words: a∗ is the agent's mixed strategy. The payoff f(.,.) is a function of the mixed strategy in two ways: the first argument is the causal channel, where actions directly have effects; the second argument represents the "acausal" channel, IE, the fact that the other players know the agent's mixed strategy and this influences their actions. The agent is maximizing across the first channel, but "ignoring" the second channel; that is why we have to solve for a fixed point to find Nash equilibria. This motivates the notion of "stop gradient": if we think in terms of neural-network type learning, we're sending the gradient through the first argument but not the second. (It's a kind of mathematically weird thing to do!) Myopia ------ Thinking in terms of iterated games, we can also justify the label "myopia". Thinking in terms of "gradients" suggests that we're doing some kind of training involving repeatedly playing the game. But we're training an agent to play [as if it's a single-shot game](https://www.lesswrong.com/posts/dKAJqBDZRMMsaaYo5/in-logical-time-all-games-are-iterated-games): the gradient is rewarding behavior which gets more reward within the single round even if it compromises long-run reward. This is a weird thing to do: why implement a training regime to produce strategies like that, if we believe the nash-equilibrium model, IE we think the other players will know our mixed strategy and react to it? We can, for example, win chicken by going straight more often than is myopically rational. Generally speaking, we expect to get better rewards in the rounds after training if we optimized for non-myopic strategies during training. Directionality -------------- To justify my term "directionality" for these phenomena, we have to look at a different example: the idea that "when beliefs and reality don't match, we change our beliefs". IE: when optimizing for truth, we optimize "only in one direction". How is this possible? We can write down a loss function, such as Bayes' loss, to define accuracy of belief. But how can we optimize it only "in one direction"? We can see that this is the same thing as myopia. When training predictors, we only consider the efficacy of hypotheses one instance at a time. Consider *supervised learning:* we have "questions" x1,x2,... etc and are trying to learn "answers" y1,y2,... etc. If a neural network were somehow able to mess with the training data, it would not have much pressure to do so. If it could give an answer on instance x1 which improved its ability to answer on x2 by manipulating y2, the gradient would not specially favor this. Suppose it is possible to take some small hit (in log-loss terms) on y1 for a large gain on y2. The large gain for x2 would not reinforce the specific neural patterns responsible for making y2 easy (only the patterns responsible for successfully taking advantage of the easiness). 
The small hit on x1 means there's an incentive not to manipulate y2. It is *possible* that the neural network learns to manipulate the data, if by chance the neural patterns which shift x1 are the same as those which successfully exploit the manipulation at x2. However, this is a fragile situation: if there are other neural sub-patterns which are equally capable of giving the easy answer on x2, the reward gets spread around. (Think of these as parasites taking advantage of the manipulative strategy without doing the work necessary to sustain it.) Because of this, the manipulative sub-pattern may not "make rent": the amount of positive gradient it gets may not make up for the hit it takes on x1. And all the while, neural sub-patterns which do better on x1 (by refusing to take the hit) will be growing stronger. Eventually they can take over. This is exactly like myopia: strategies which do better in a specific case are favored for that case, despite global loss. The neural network fails to successfully coordinate with itself to globally minimize loss. To see why this is also like stop-gradients, think about the loss function as l(w,w∗): the neural weights w determine loss through a "legitimate" channel (the prediction quality on a single instance), plus an "illegitimate" channel (the cross-instance influence which allows manipulation of y2 through the answer given for x1). We're optimizing through the first channel, but not the second. The difference between supervised learning and reinforcement learning is just: reinforcement learning explicitly tracks helpfulness of strategies across time, rather than assuming a high score at x2 has to do with only behaviors at x2! As a result, RL can coordinate with itself across time, whereas supervised learning cannot. Keep in mind that this is a good thing: the algorithm may be "leaving money on the table" in terms of prediction accuracy, but this is exactly what we want. We're trying to make the map match the territory, not the other way around. ***Important side-note:** this argument obviously has some relation to the question of how we should think about [inner optimizers](https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem) and how likely we should expect them to be. However, I think it is not a direct argument against inner optimizers. (1) The emergence of an inner optimizer is exactly the sort of situation where the gradients end up all feeding through one coherent structure. Other potential neural structures cannot compete with the sub-agent, because it has started to intelligently optimize; few interlopers can take advantage of the benefits of the inner optimizer's strategy, because they don't know enough to do so. So, all gradients point to continuing the improvement of the inner optimizer rather than alternate more-myopic strategies. (2) Being an inner optimizer is not synonymous with non-myopic behavior. An inner optimizer could give myopic responses on the training set while internally having less-myopic values. Or, an inner optimizer could have myopic but very divergent values. Importantly, an inner optimizer need not take advantage of any data-manipulation of the training set like that I've described; it need not even have access to any such opportunities.* The Partial Agency Paradox ========================== I've given a couple of examples. I want to quickly give some more to flesh out the clusters as I see them: * As I said, myopia is "partial agency" whereas foresight is "full agency". 
Think of how an agent with high time-preference (ie steep temporal discounting) can be money-pumped by an agent with low time-preference. But the limit of no-temporal-discounting-at-all is not always well-defined. * An updatefull agent is "partial agency" whereas updatelessness is "full agency": the updateful agent is failing to use some channels of influence to get what it wants, because it already knows those things and can't imagine them going differently. Again, though, full agency seems to be an idealization we can't quite reach: [we don't know how to think about updatelessness in the context of logical uncertainty](https://www.lesswrong.com/posts/9sYzoRnmqmxZm4Whf/conceptual-problems-with-udt-and-policy-selection), only more- or less- updatefull strategies. * I gave the beliefs←territory example. We can also think about the values→territory case: when the world differs from our preferences, we change the world, not our preferences. This has to do with avoiding wireheading. * Similarly, we can think of examples of corrigibility -- such as respecting an off button, or avoiding manipulating the humans -- as partial agency. * Causal decision theory is more "partial" and evidential decision theory is less so: EDT wants to recognize more things as legitimate channels of influence, while CDT claims they're not. Keep in mind that [the math of causal intervention is closely related to the math which tells us about whether an agent wants to manipulate a certain variable](https://arxiv.org/abs/1902.09980) -- so there's a close relationship between CDT-vs-EDT and wireheading/corrigibility. I think people often take a pro- or anti- partial agency position: if you are trying to one-box in Newcomblike problems, trying to cooperate in prisoner's dilemma, trying to define logical updatelessness, trying for superrationality in arbitrary games, etc... you are generally trying to remove barriers to full agency. On the other hand, if you're trying to avert instrumental incentives, make sure an agent allows you to change its values, or doesn't prevent you from pressing an off button, or doesn't manipulate human values, etc... you're generally trying to add barriers to full agency. I've historically been more interested in dropping barriers to full agency. I think this is partially because I tend to assume that full agency is what to expect in the long run, IE, "all agents want to be full agents" -- evolutionarily, philosophically, etc. Full agency should result from instrumental convergence. Attempts to engineer partial agency for specific purposes feel like fighting against this immense pressure toward full agency; I tend to assume they'll fail. As a result, I tend to think about AI alignment research as (1) needing to understand full agency much better, (2) needing to mainly think in terms of aligning full agency, rather than averting risks through partial agency. However, in contrast to this historical view of mine, I want to make a few observations: * Partial agency sometimes seems like *exactly what we want,* as in the case of map←territory optimization, rather than a crude hack which artificially limits things. * Indeed, partial agency of this kind seems *fundamental to full agency.* * Partial agency seems *ubiquitous in nature*. Why should I treat full agency as the default? So, let's set aside pro/con positions for a while. 
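Before moving on, here is a minimal numerical sketch of the stop-gradient idea from the opening section (the payoff function below is a toy of my own choosing, not from the post): gradient ascent that treats the second argument of f as fixed converges to a fixed point a∗ = argmax_a f(a, a∗), which is generally different from what optimizing f(a, a) through both channels would give.

```python
# Toy illustration: "partial" optimization ascends the gradient of f(a, b) in its
# first argument only, feeding the current value of a back in as b -- a stop-gradient
# on the second argument. (In an autodiff framework, jax.lax.stop_gradient or
# torch.Tensor.detach plays this role.) The payoff f below is made up for the example.

def f(a, b):
    return -(a - 1.0) ** 2 - a * b   # a = "causal" channel, b = "acausal" channel

def grad_first_arg(a, b, eps=1e-6):
    # Numerical gradient of f with respect to the first argument only.
    return (f(a + eps, b) - f(a - eps, b)) / (2 * eps)

a = 0.0
for _ in range(2000):
    b = a                               # the second argument sees the current strategy...
    a += 0.01 * grad_first_arg(a, b)    # ...but no gradient flows through it

print(f"stop-gradient fixed point: a = {a:.4f}")   # -> 0.6667, satisfying a = argmax_a' f(a', a)

# Compare: "full agency" optimizes g(a) = f(a, a) through both channels.
def grad_full(a, eps=1e-6):
    return (f(a + eps, a + eps) - f(a - eps, a - eps)) / (2 * eps)

a_full = 0.0
for _ in range(2000):
    a_full += 0.01 * grad_full(a_full)

print(f"full-gradient optimum:     a = {a_full:.4f}")   # -> 0.5000, the argmax of f(a, a)
```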
What I'm interested in at the moment is ***the descriptive study of partial agency as a phenomenon.*** I think this is an organizing phenomenon behind a lot of stuff I think about. **The partial agency paradox** is: why do we see partial agency naturally arising in certain contexts? Why are agents (so often) myopic? Why have a notion of "truth" which is about map←territory fit but not the other way around? Partial agency is a weird thing. I understand what it means to optimize something. I understand how a [selection process](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control) can arise in the world (evolution, markets, machine learning, etc), which drives things toward maximization of some function. Partial optimization is a comparatively weird thing. Even if we can set up a "partial selection process" which incentivises maximization through only some channels, wouldn't it be blind to the side-channels, and so unable to enforce partiality in the long-term? Can't someone always come along and do better via full agency, no matter how our incentives are set up? Of course, I've already said enough to suggest a resolution to this puzzle. **My tentative resolution to the paradox** is: you don't build "partial optimizers" by taking a full optimizer and trying to add carefully balanced incentives to create indifference about optimizing through a specific channel, or anything like that. (Indifference at the level of the selection process does not lead to indifference at the level of the agents evolved by that selection process.) Rather, *partial agency is what selection processes incentivize by default*. If there's a learning-theoretic setup which incentivizes the development of "full agency" (whatever that even means, really!) I don't know what it is yet. Why? Learning is basically episodic. In order to learn, you (sort of) need to do the same thing over and over, and get feedback. Reinforcement learning tends to assume [ergodic](https://en.wikipedia.org/wiki/Ergodic_process) environments so that, no matter how badly the agent messes up, it eventually re-enters the same state so it can try again -- this is a "soft" episode boundary. Similarly, RL tends to require temporal discounting -- this also creates a soft episode boundary, because things far enough in the future matter so little that they can be thought of as "a different episode". So, like map←territory learning (that is, epistemic learning), we can kind of expect any type of learning to be myopic to some extent. This fits the picture where full agency is an idealization which doesn't really make sense on close examination, and partial agency is the more real phenomenon. However, this is absolutely not a conjecture on my part that all learning algorithms produce partial agents of some kind rather than full agents. There may still be frameworks which allow us to approach full agency in the limit, such as [taking the limit of diminishing discount factors](https://www.lesswrong.com/posts/S3W4Xrmp6AL7nxRHd/formalising-decision-theory-is-hard#H45aYhMFED4Byr72m), or [considering asymptotic behavior of agents who are able to make precommitments](https://www.lesswrong.com/posts/S3W4Xrmp6AL7nxRHd/formalising-decision-theory-is-hard#X7R4rxHpkEKycJvSb). We may be able to achieve some aspects of full agency, [such as superrationality in games](https://www.lesswrong.com/posts/S3W4Xrmp6AL7nxRHd/formalising-decision-theory-is-hard#3yw2udyFfvnRC8Btr), without others. 
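To make the "soft episode boundary" point concrete, here is a minimal sketch (a toy calculation of mine, not from the post; the discount factors are arbitrary) showing how temporal discounting makes rewards beyond roughly 1/(1 − gamma) steps contribute almost nothing to the discounted return:

```python
# Toy illustration: with discount factor gamma, the weight on a reward t steps
# ahead is gamma**t, so the total weight on everything at least H steps away is
# gamma**H / (1 - gamma). The fraction of the total (infinite-horizon) weight
# lying beyond H steps is therefore gamma**H, which decays geometrically.

def fraction_of_return_beyond(gamma: float, horizon: int) -> float:
    """Fraction of total discounted weight on rewards at least `horizon` steps away."""
    return gamma ** horizon

for gamma in (0.9, 0.99):
    effective_horizon = round(1 / (1 - gamma))  # ~10 for 0.9, ~100 for 0.99
    for h in (effective_horizon, 5 * effective_horizon):
        print(f"gamma={gamma}: fraction of return beyond {h} steps "
              f"= {fraction_of_return_beyond(gamma, h):.4f}")
```

Rewards a few multiples of 1/(1 − gamma) steps out are effectively invisible to the learner -- for learning purposes they belong to "a different episode", which is the soft boundary described above.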
Again, though, my interest here is more to understand what's going on. The point is that it's actually really easy to set up incentives for partial agency, and not so easy to set up incentives for full agency. So it makes sense that the world is full of partial agency.

Some questions:

* To what extent is it really true that settings such as supervised learning disincentivize strategic manipulation of the data? Can my argument be formalized?
* If thinking about "optimizing a function" is too coarse-grained (a supervised learner doesn't exactly minimize prediction error, for example), what's the best way to revise our concepts so that partial agency becomes obvious rather than counterintuitive?
* Are there better ways of characterizing the partiality of partial agents? Does myopia cover all cases (so that we can understand things in terms of time-preference), or do we need the more structured stop-gradient formulation in general? Or perhaps a more causal-diagram-ish notion, as my "directionality" intuition suggests? Do the different ways of viewing things have nice relationships to each other?
* Should we view partial agents as multiagent systems? I've characterized it in terms of something resembling game-theoretic equilibrium. The 'partial' optimization of a function arises from the [price of anarchy](https://en.wikipedia.org/wiki/Price_of_anarchy), or as it's known around lesswrong, Moloch. Are partial agents really bags of full agents keeping each other down? This seems a little true, to me, but also doesn't strike me as the most useful way of thinking about partial agents. For one thing, it takes full agents as a necessary concept to build up partial agents, which seems wrong to me.
* What's the relationship between the selection process (learning process, market, ...) and the type of partial agents incentivised by it? If we think in terms of myopia: given a type of myopia, can we design a training procedure which tracks or doesn't track the relevant strategic influences? If we think in terms of stop-gradients: we can take "stop-gradient" literally and stop there, but I suspect there is more to be said about designing training procedures which disincentivize the strategic use of specified paths of influence. If we think in terms of directionality: how do we get from the abstract "change the map to match the territory" to the concrete details of supervised learning?
* What does partial agency say about inner optimizers, if anything?
* What does partial agency say about corrigibility? My hope is that there's a version of corrigibility which is a perfect fit in the same way that map←territory optimization seems like a perfect fit.

Ultimately, the concept of "partial agency" is probably confused. The partial/full clustering is very crude. For example, it doesn't make sense to think of a non-wireheading agent as "partial" because of its refusal to wirehead. And it might be odd to consider a myopic agent as "partial" -- it's just a time-preference, nothing special. However, I do think I'm pointing at a phenomenon here, which I'd like to understand better.
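As a concrete illustration of the stop-gradient framing above, here is a minimal sketch (a toy construction of mine, not code from the post; the constant B, the learning rate, and the step count are all illustrative) of a loss l(w, w∗) with a "legitimate" channel (prediction quality on x1) and an "illegitimate" channel (the answer on x1 leaking into the label y2). Gradient descent through the first argument only -- the second argument held fixed, as if wrapped in a stop-gradient -- refuses to manipulate, while descent through both arguments happily takes the hit on x1:

```python
B = 1.0  # the model's "easy answer" on instance 2 (illustrative constant)

def loss(w, w_star):
    loss1 = w ** 2              # legitimate channel: answer on x1 scored against label 0
    loss2 = (B - w_star) ** 2   # illegitimate channel: the answer on x1 shifts the label y2
    return loss1 + loss2

def train_stop_gradient(w, lr=0.1, steps=200):
    """Descend only through the first argument; w_star is treated as a constant
    copy of w (a stop-gradient on the manipulation channel)."""
    for _ in range(steps):
        grad = 2 * w            # d/dw of loss(w, w_star) with w_star held fixed
        w -= lr * grad
    return w

def train_full_agency(w, lr=0.1, steps=200):
    """Descend through both arguments: the total derivative d/dw of loss(w, w)."""
    for _ in range(steps):
        grad = 2 * w - 2 * (B - w)
        w -= lr * grad
    return w

w_myopic = train_stop_gradient(0.9)
w_greedy = train_full_agency(0.9)
print(f"stop-gradient learner: w = {w_myopic:.2f}, total loss = {loss(w_myopic, w_myopic):.2f}")
print(f"full-agency learner:   w = {w_greedy:.2f}, total loss = {loss(w_greedy, w_greedy):.2f}")
```

The stop-gradient learner converges to the honest answer on x1 (w ≈ 0) and "leaves money on the table" (total loss 1.0), while the full-agency learner sacrifices accuracy on x1 to exploit the manipulation channel (w ≈ 0.5, total loss 0.5) -- the trade-off described in the supervised-learning discussion above.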
5cf72a73-d310-4023-824b-5e53e4b96b77
trentmkelly/LessWrong-43k
LessWrong
Convexity and truth-seeking

A putative new idea for AI control; index here. This post starts with a very simple and retrospectively obvious observation: If we want an AI to give us an estimate of expected utility, it needs to be motivated to give us that estimate. Once we have that in mind, and remember that any extra motivation involves trade-offs, the points of the previous posts on truth-seeking become clearer.

----------------------------------------

Convexity and AI-chosen outputs

Let u be a utility known to range within R⊆ℝ. Let f be a twice differentiable function that is convex on R. For simplicity, we'll strengthen convexity to requiring that f′′ be strictly positive on R. Then define:

* u#(r)=f(r)+(u−r)f′(r), where r∈R is the output of the AI at some future time t.

For this post, we'll assume that r is not known by anyone but the AI (in future posts we'll look more carefully at allowing r to be known to us). Differentiating u# with respect to r gives:

* f′(r)+(u−r)f′′(r)−f′(r)=f′′(r)(u−r).

The expectation of this is zero iff r=Et(u). If we make that choice, notice that the expression is twice differentiable at r=Et(u) (even if f′′′ is not defined there!) and its derivative is simply −f′′(r), which is negative on R. Thus choosing r=Et(u) will maximise the AI's utility on R.

How much utility will the AI get? Since r will be set to the expectation of u at time t, clearly u#(r) will give utility f(Et(u)). At time 0, the AI's expectation of u# is therefore E0(f(Et(u))). If f were affine, this would simplify to f(E0(u)); but f is specifically not affine. Since f is convex, knowing more about the expected value of u can only (expect to) increase the expectation of u#. The AI values information.

Cost and truth-seeking

Consider first the function f(r)=r² with R=[0,1]. That can be graphed as follows:

Here, we're imagining that E(u)=0.6, if the AI were a pure u-maximiser. In that situation, the expectation of u# is at least 0.36. Because of the convexity of f, however, an expe
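A quick numerical sanity check of the construction above (a sketch of mine; the Beta(3, 2) distribution is an arbitrary illustrative choice whose mean is 0.6): with f(r) = r² on R = [0,1], the expected value of u#(r) = f(r) + (u − r)f′(r) is maximised by reporting r = E(u), and the maximised value is f(E(u)) = 0.36, matching the figures in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_sharp(r, u):
    # f(r) = r**2, f'(r) = 2*r, so u#(r) = r**2 + (u - r) * 2 * r
    return r ** 2 + (u - r) * 2 * r

# Illustrative distribution for u over [0, 1] with mean 0.6 (an arbitrary choice).
u_samples = rng.beta(3.0, 2.0, size=100_000)
print("E[u] ≈", round(u_samples.mean(), 3))

# Expected u# as a function of the reported value r, evaluated on a grid.
grid = np.linspace(0.0, 1.0, 501)
expected = np.array([u_sharp(r, u_samples).mean() for r in grid])
print("report maximising E[u#] ≈", grid[expected.argmax()])   # ≈ E[u] = 0.6
print("maximal E[u#] ≈", round(expected.max(), 3))             # ≈ f(E[u]) = 0.36
```

In this sketch, reporting anything other than the true expectation lowers the AI's expected reward, and (by Jensen's inequality applied to the convex f) better information about u can only raise E0(f(Et(u))) -- the sense in which the AI values information.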
a04e7f07-4c1f-46c4-a9d2-ed84f7df241e
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Nuclear Espionage and AI Governance ABSTRACT: ========= *Using both primary and secondary sources, I discuss the role of espionage in early nuclear history. Nuclear weapons are analogous to AI in many ways, so this period may hold lessons for AI governance. Nuclear spies successfully transferred information about the plutonium implosion  bomb design and the enrichment of fissile material. Spies were mostly ideologically motivated. Counterintelligence was hampered by its fragmentation across multiple agencies and its inability to be choosy about talent used on the most important military research program in the largest war in human history. Furthermore, the Manhattan Project’s leadership prioritized avoiding domestic political oversight over preventing espionage. Nuclear espionage most likely sped up Soviet nuclear weapons development, but the Soviet Union would have been capable of developing nuclear weapons within a few years without spying. The slight gain in speed due to spying may nevertheless have been strategically significant.* *Based on my study of nuclear espionage, I offer some tentative lessons for AI governance:* * *The importance of spying to transformative AI development is likely to be greater if the scaling hypothesis is false than if it is true.* * *Regardless of the course that AI technology takes, spies may be able to convey information about engineering or tacit knowledge (although more creativity will be required to transfer tacit than explicit knowledge).* * *Nationalism as well as ideas particularly prevalent among AI scientists (including belief in the open source ideal) may serve as motives for future AI spies. Spies might also be financially motivated, given that AI development mostly happens in the private sector (at least for now) where penalties for spying are lower and financial motivations are in general more important.* * *One model of technological races suggests that safety is best served by the leading project having a large lead, and therefore being secure enough in its position to expend resources on safety. Spies are likely, all else equal, to decrease the lead of the  leader in a technological race. Spies are also likely to increase enmity between competitors, which seems to increase accident risk robustly to changes in circumstances and modeling assumptions. Therefore, it may make sense for those who are concerned about AI safety to take steps to oppose espionage**—**even if they have no preference for the labs being harmed by espionage over the labs benefiting from espionage.* * *On the other hand, secrecy (the most obvious way to prevent espionage) may increase risks posed by AI by making AI systems more opaque. And countermeasures to espionage that drive scientists out of conscientious projects may have perverse consequences.* Acknowledgements: I am grateful to Matthew Gentzel for supervising this project and Michael Aird, Christina Barta, Daniel Filan, Aaron Gertler, Sidney Hough, Nat Kozak, Jeffery Ohl, and Waqar Zaidi for providing comments. This research was supported by a fellowship from the Stanford Existential Risks Initiative.  This post is a short version of the report, x-posted from [EA Forum](https://forum.effectivealtruism.org/posts/CKfHDw5Lmoo6jahZD/nuclear-espionage-and-ai-governance-1). The full version with additional sections, an appendix, and a bibliography, is available [here](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit?usp=sharing).  
![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/a767129ace7cef3f323962396367c04d4c7d10b592f12cd5.png)1. Introduction =============== The early history of nuclear weapons is in many ways similar to hypothesized future strategic situations involving advanced artificial intelligence ([Zaidi and Dafoe 2021](https://web.archive.org/web/20210325035303/https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf), 4). And, in addition to the objective similarity of the situations, the situations may be made more similar by deliberate imitation of the Manhattan Project experience ([see this report](https://armedservices.house.gov/_cache/files/2/6/26129500-d208-47ba-a9f7-25a8f82828b0/6D5C75605DE8DDF0013712923B4388D7.future-of-defense-task-force-report.pdf#page=13) [to the US House Armed Service Committee](https://web.archive.org/web/20201008094452/https://armedservices.house.gov/_cache/files/2/6/26129500-d208-47ba-a9f7-25a8f82828b0/6D5C75605DE8DDF0013712923B4388D7.future-of-defense-task-force-report.pdf)). So it is worth looking to the history of nuclear espionage for inductive evidence and conceptual problems relevant to AI development.  The Americans produced a detailed official history and explanation of the Manhattan Project, entitled the Smyth Report, and released it on August 11, 1945, five days after they dropped the first nuclear bomb on Japan ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 126). For the Soviets, the Smyth Report “candidly revealed the scale of the effort and the sheer quantity of resources, and also hinted at some of the paths that might work and, by omission, some that probably would not” ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 103). While it would not have allowed for copying the Manhattan Project in every detail, the Soviets were able to use the Smyth Report as “a general guide to the methods of isotope separation, as a checklist of problems that needed to be solved to make separation work, and as a primer in nuclear engineering for the thousands upon thousands of engineers and workers who were drafted into the project” ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 104).  There were several reasons that the Smyth Report was released. One was a belief that, in a democratic country, the public ought to know about such an important matter as nuclear weapons. Another reason was a feeling that the Soviets would likely be able to get most of the information in the Smyth Report fairly easily regardless of whether it was released. Finally, releasing a single report would clearly demarcate information that was disseminable from information that was controlled, thereby stemming the tide of disclosures coming from investigative journalists and the tens of thousands of former Manhattan Project employees ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 124-125). Those leaks would not be subject to strategic omission, and might, according to General Leslie Groves (Director of the Manhattan Project) “start a scientific battle which would end up in congress” (Quoted in [Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 125). 
The historian Michael Gordin summarized the general state of debate between proponents and opponents of nuclear secrecy in the U.S. federal government in the late 1940s as follows: > How was such disagreement possible? How could Groves, universally acknowledged as tremendously security-conscious, have let so much information, and such damaging information, go?... The difference lay in what Groves and his opponents considered to be useful for building an atomic bomb. Groves emphasized the most technical, most advanced secrets, while his opponents stressed the time-saving utility of knowing the general outlines of the American program ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 93). > > In Gordin's view,  "in the context of the late 1940s, his [Groves's] critics were more right than wrong" ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 93), though it is important to note that the Smyth Report's usefulness was complemented by the extent of KGB spying of which neither Groves nor his critics were yet aware. Stalin decided to imitate the American path to the nuclear bomb as closely as possible because he believed that it would be both the “fastest” and the “most reliable” (Quoted in [Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 152-153). The Smyth Report (and other publicly available materials on nuclear weapons) contained strategic omissions. The Soviets used copious information gathered by spies to fill in some of the gaps. 2. Types of information stolen ============================== 2.1 Highly abstract engineering: bomb designs --------------------------------------------- Bomb designs were one of the most important categories of information transferred by espionage. To illustrate why design transfer was so important, it is necessary to review some basic principles of nuclear weaponry (most of what follows on nuclear weapons design is adapted from a [2017 talk by Matt Bunn](https://www.youtube.com/watch?v=jqLbcNpeBaw)).  Fission weapons work by concentrating a critical mass of fissile material. A critical mass is enough fissile material to start a nuclear chain reaction. A critical mass by itself, however, is not a viable nuclear weapon because it will heat up dramatically, turn into gas, expand in volume, and cease to constitute a critical mass, thereby stopping the chain reaction before it has had a chance to consume most of the fuel. The simplest possible nuclear bomb, a gun type design, works by launching a shell of highly enriched uranium-235 into another piece of highly enriched uranium-235. Neither piece of uranium-235 is critical by itself, but together they amount to a critical mass. The tamper prevents the critical mass from expanding out into a diffuse cloud of gas. A massive amount of heat is released, turning the fissile material to gas. The temperature rises to that of the core of the sun. In a gas, a rise in temperature causes a corresponding increase in pressure. This leads to a massive increase in pressure, and an extremely energetic explosion. The bomb dropped on Hiroshima, Little Boy, was a gun type bomb. 
![](https://lh3.googleusercontent.com/JlXW2MMbecykbUAjD_MjktMwQaCGosXtf3m30xbaRe2x6XZ1yrxOX0cqNAZQqzkcx6nLKy72k5RwHDpEWo_xppbmDnpWU11c-crKFU6S23AaF0MAFd-9SB-rozwtLgVmD58s8rQ0=s0)[Gun type bomb design](https://commons.wikimedia.org/wiki/File:Gun-type_Nuclear_weapon.png)The amount of fissile material required to achieve critical mass decreases with density squared. So compressing one’s fissile material means one gets more explosive power for the same amount of fuel. This is the key to the more advanced plutonium implosion bomb design, which was used for the Fat Man bomb dropped on Nagasaki. A plutonium implosion bomb has a core of fissionable plutonium surrounded by a tamper in the middle and, at the top layer, a chemical explosive. The explosive detonates, pushing the tamper in towards the core, which begins a nuclear chain reaction. This design uses plutonium-239, which is easier to obtain than the uranium-235 used in a gun type bomb.  ![](https://lh4.googleusercontent.com/02_K6pUoDdNo-G7PtovWHs2ykVJsUUlAsTm8Vo2-7xI63kvypFekAC4zJIGREbNfkBCRDMuJst8Ec0YTeRGVZQo2R077LwS2Wi1VDFHDfmxfSpcj8vS-0oSZinYY5lETyX4TKzl6=s0)[Plutonium implosion bomb design](https://en.wikipedia.org/wiki/Nuclear_weapon_design#/media/File:Implosion_bomb_animated.gif)The first Soviet nuclear test was not of the relatively simple gun type. Instead it was a far more complex plutonium implosion assembly. The Soviets received the American plutonium implosion design twice, from two spies, and copied it for their first nuclear bomb ([Holloway 1994](https://www.google.co.uk/books/edition/Stalin_and_the_Bomb/Yu6ODwAAQBAJ?hl=en&gbpv=0), 366; [Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 117, 119).  Having two sources for the design gave the Soviets confidence that the design would work and was not FBI disinformation, no small thing given that the leaders of the Soviet nuclear weapons effort had reason to believe they would be executed if the first test failed ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 171; [Holloway 1994](https://www.google.co.uk/books/edition/Stalin_and_the_Bomb/Yu6ODwAAQBAJ?hl=en&gbpv=0), 218). Furthermore, the Soviets were hard pressed to separate enough uranium-235 from the more plentiful uranium-238 to make a gun type uranium bomb work (gun type plutonium bombs are not viable). This was because the Western Allies had taken pains to corner the world supply of high quality uranium ore. The low quality ore that the Soviets had was adequate to the task of breeding plutonium, but it would have been more expensive and slower for the Soviets to separate enough uranium-235 enough to build a gun type bomb ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 149-151). Often, controlling material and controlling information are thought of as different strategies for preventing nuclear proliferation. But in the first years after the creation of the atomic bomb, the West’s failure to control information about nuclear weapons design undermined its strategy of controlling fissile material to prevent nuclear proliferation. 2.2 Less abstract engineering ----------------------------- Most of the effort expended during the Manhattan Project went into the enrichment of fissile material. 
Ted Hall provided information about methods of uranium isotope (“25” in KGB code) separation, as reported in a decrypted cable sent from New York Station to Moscow Center on May 26, 1945:   ![](https://lh6.googleusercontent.com/F3s_EBCUwuVoQthsM3Vuvk-ulZCyh2LUIgGDlXZ4-0SkEMVvbv4Icl6Nmk-q23dDbKP2BgyFNAxsm5RfLMtj9q9FsJL5-Zcdt2Oc6ZAATpPlKTIPvJndPYVkELXGu8C-S0ZCadJl=s0)[KGB cable about Ted Hall](https://web.archive.org/web/20190923060419/https://www.nsa.gov/Portals/70/documents/news-features/declassified-documents/venona/dated/1945/26may_mlad2.pdf)Fuchs similarly provided data about electromagnetic techniques of isotope separation for uranium-235 (“ENORMOZ” in KGB code ordinarily referred to the Manhattan Project as a whole, but in this case it meant uranium-235 specifically), which was reported in a decrypted cable from Moscow to New York of April 10, 1945. ![](https://lh4.googleusercontent.com/KLsfapypM73ZXDnyH7dpm0yKjm05r4tIruAu4TGy5Idav9XHfR0qWMdJBIVgvZHhWRbtFyd00rRza01eySIBMnmJpggrGkXvtaCbss0fYTGmqGEa7gxZL4ynpxvgEe8bZbR_kdyo=s0)[KGB cable about Klaus Fuchs](https://web.archive.org/web/20190923055510/https://www.nsa.gov/Portals/70/documents/news-features/declassified-documents/venona/dated/1945/10apr_atomic_bomb_info.pdf)In addition to technical reports on enriching fissile material from Fuchs and Hall, the Soviets had plant designs for the Oak Ridge facility from Russell McNutt, data on plutonium from an unidentified spy, and data on the Chalk River facility in Canada’s nuclear reactor from Alan Nunn May, ([see the appendix of the full report for a list of Manhattan Project spies](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit#heading=h.gppzuj2ox6yg)). The Soviets were also occasionally able to acquire physical samples from spies. They received 162 micrograms of uranium-235 from Alan Nunn May, and David Greenglass “provided the Soviets with a physical sample of part of the triggering mechanism [of a plutonium bomb]” ([Klehr and Haynes 2019](https://web.archive.org/web/20191124174354/https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol-63-no-3/pdfs/Fourth-Soviet-Spy-LosAlamos.pdf), 12). 2.3 Types of information and the AI case ---------------------------------------- To the extent that the information that the most advanced AI projects have that their closest competitors lack is highly abstract and easy to convey, the potential significance of spying is very large. Simple, abstract ideas (analogous to basic principles of bomb design in the nuclear case) are the easiest to transfer. The question of how important theoretical breakthroughs will be to the future development of AI is closely related to the debate over the scaling hypothesis. The scaling hypothesis holds that current techniques are sufficient to eventually produce transformative artificial intelligence (TAI) if the neural networks are just made large enough ([Branwen 2020](https://web.archive.org/web/20210717005058/https://www.gwern.net/Scaling-hypothesis); for an explanation of the idea of TAI see [Karnofsky 2016](https://web.archive.org/web/20210122012058/https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1)). The reason that TAI does not yet exist, per the scaling hypothesis, is that the hardware and the will to invest in scaling does not yet exist ([Branwen 2020](https://web.archive.org/web/20210717005058/https://www.gwern.net/Scaling-hypothesis)). 
To the extent that this is true, it seems that stealing highly abstract ideas about AI algorithms is unlikely to make much of an impact, and that there is unlikely to be an algorithmic analog of the plutonium implosion bomb design. On the other hand, abstract ideas about data types, data processing, or assembling the requisite computing power might be transferred by spies to great effect.     Spies transferred about 10,000 pages of technical material on nuclear weapons from the Manhattan Project to the Soviet Union ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 60). At that level of volume, one can convey information about engineering that is concrete and detailed rather than abstract and simple, such as the design of machinery and facilities used for the separation of uranium isotopes. Even devoted proponents of the scaling hypothesis acknowledge that when replicating an effort based on scaling up existing techniques, one should “never underestimate the amount of tweaking and special sauce it takes” ([Branwen 2020](https://web.archive.org/web/20210717005058/https://www.gwern.net/Scaling-hypothesis)).  But just how significant is engineering knowledge of an intermediate level of abstraction likely to be as a bottleneck on AI capabilities? Unlike the Manhattan Project, advanced AI does not obviously require a massive industrial effort to purify rare materials. However, if significant AI research begins to be conducted by governments and international trade in computer chips becomes more restricted, the ability to solve engineering problems in the physical world might again come to differentiate the leading nation from its closest competitors. In such a regime, spying on the details of electrical engineering, materials science, and industrial processes might prove important (see [Khan and Mann 2020](https://web.archive.org/web/20210531050703/https://cset.georgetown.edu/publication/ai-chips-what-they-are-and-why-they-matter/)). The Anglo-American effort to prevent nuclear proliferation by cornering the world supply of uranium (discussed in section 2.1 above) might have been far more effective but for the Soviet’s use of espionage data on the plutonium route to the bomb. Similarly, strategies to restrict AI proliferation that rely on restricting information, and strategies that rely on restricting access to physical materials (in the AI case rare earth metals, chips, and semiconductor manufacturing equipment rather than high quality uranium ore) might be mutually reinforcing. Tacit knowledge seems to play an important role in AI research. Knowing what sorts of training data to select for a model might involve tacit knowledge. More significantly, knowing which research directions are likely to be promising is a key element of AI research (or any other kind of research), and such knowledge includes an important tacit dimension. In a discussion of what one learns in a computer science PhD program, Andrej Karpathy explained the importance of the tacit knowledge embedded in “taste” to computer science research: > When it comes to choosing problems you’ll hear academics talk about a mystical sense of “taste”. It’s a real thing. When you pitch a potential problem to your adviser you’ll either see their face contort, their eyes rolling, and their attention drift, or you’ll sense the excitement in their eyes as they contemplate the uncharted territory ripe for exploration. 
In that split second a lot happens: an evaluation of the problem’s importance, difficulty, its sexiness, its historical context (and possibly also its fit to their active grants). In other words, your adviser is likely to be a master of the outer loop and will have a highly developed sense of taste for problems. During your PhD you’ll get to acquire this sense yourself ([Karpathy 2016](https://web.archive.org/web/20210625204713/http://karpathy.github.io/2016/09/07/phd/)). > > Research taste cannot easily be transferred by espionage. It might be possible to formalize certain aspects of research taste, or to accelerate the process of learning about it implicitly by mimicking the experience of training under a senior researcher. How much better is the taste of the best researchers on the most advanced AI project likely to be than the taste of the second-best researchers on the second best AI project? Rohin Shah reports that advanced computer science PhD students at UC Berkeley have much better research taste than beginning PhD students, and that professors have better taste than advanced PhD students ([Shah 2020](https://web.archive.org/web/20210428081910/https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/)). Is there a similar asymmetry in taste between the very best researchers in the world and their close competitors? This seems like a promising question for further study but, provisionally: Michael Polanyi**—**the philosopher whose work brought about the current focus on tacit knowledge in the history of science and technology**—**believed that the greatness of a discovery was proportional to the amount of tacit knowledge required to select the problem that led to it (Polanyi [1966] 2009, 23). The more that taste and other forms of tacit knowledge distinguish the leading AI projects from less advanced ones, the more difficult it will be for spies to significantly help the laggards catch up. Spies could work to transfer personnel from the leader to the laggards as a way of transferring tacit knowledge. But this would duplicate the [issues with trust that limited the usefulness of Soviet spies who were exfiltrated such as George Koval and Oscar Seborer](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit#heading=h.rd2epngoenoy). Alternatively, spies might try some scheme of rendering tacit knowledge explicit. 3. Motivations for espionage ============================ 3.1 Klaus Fuchs: ideology and conscience ---------------------------------------- Klaus Fuchs was (along with Ted Hall) one of the two most important spies in the Manhattan Project. He was a theoretical physicist. Fuchs took refuge in England after the Nazis came to power in Germany because his history as a Communist Party activist made him a target of the Gestapo. While in England, Fuchs began to work on nuclear weapons research and informed a German Communist Party leader that he had information that might be of interest to Soviet intelligence. Fuchs was sent to America to work as a nuclear physicist on the Manhattan Project, and continued to spy for the U.S.S.R. ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 92-93).  Fuch’s sister Kristel Heineman helped him on several occasions to make contact with his KGB courier in America, Harry Gold ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 95). 
Fuchs’s initial involvement in spying was clearly ideologically motivated. He later accepted money from the KGB. Fuchs claimed to his KGB courier that he did so to prove his loyalty to the Soviet Union, because he had been told that offering payment was a KGB strategy used to “morally bind” other spies to keep helping the KGB ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 128).  ![](https://upload.wikimedia.org/wikipedia/commons/c/c7/Klaus_Fuchs_-_police_photograph.jpg)[Klaus Fuchs](https://en.wikipedia.org/wiki/Klaus_Fuchs#/media/File:Klaus_Fuchs_-_police_photograph.jpg)In 1949, British and American intelligence discovered Fuchs by decrypting KGB cables as part of the Venona counterintelligence project and correlating the covernames "Charles" and "Rest" with known facts about Fuchs’s background and whereabouts ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 193-228). By that time, Fuchs was back in England and working for the British nuclear weapons lab at Harwell. MI5 investigator James Skardon approached Fuchs and said that MI5 was certain Fuchs had been spying, but did not disclose it knew: “Skardon… suggested that FUCHS had been passing information to the Russians.... Skardon then took him very carefully over the ground during the period when he [Fuchs] was in America... and said that if it was not FUCHS it ‘could only be his twin brother’” ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0) 239-240). Skardon repeatedly led Fuchs to believe he could keep his job at Harwell if he confessed ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 239, 259-260). At first Fuchs denied it, but after several interviews, he confessed to spying ([Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 257-258).  Later, Fuchs gave a written confession. The ideological motivations given in that confession were as follows:  Fuchs’s father always emphasized to him the importance of following his conscience. In university, Fuchs started out as a social democrat, but switched to the Communist Party after what he saw as the social democrat’s failure to effectively oppose the rise of Hitler ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 182-183). While working as a Communist Party activist, he began to feel that he should subordinate his personal conscience and ideas about decency to party discipline ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 183). In his confession, he reported a kind of inward compartmentalization, allowing one part of himself to be at ease with his fellow scientists and another part to spy on them.   In Fuchs’s confession, he claimed to have come to reject his former beliefs that 1. standards of personal decency had to be suspended for political reasons 2. one should subordinate one's thoughts to the Party and 3. 
the Marxist theory of freedom through the mastery of the blind forces that control society could be put into practice in an individual's life by skillful manipulation of his own environment, including that part of his environment composed of the people around him ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 185-186). Fuchs claimed his newly re-awakened conscience required him to stop working with the KGB early in 1949 and to turn himself in in 1950 in order to spare his friends at Harwell from the suspicion that would be cast on them by ambiguity about who the spy in the British nuclear weapons program was ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 185-186). His confession shows that he continued to believe he would be allowed to remain at Harwell ([Fuchs [1950] 1989](https://www.google.co.uk/books/edition/Klaus_Fuchs_Atom_Spy/x0VS48NOSgEC?hl=en&gbpv=0&bsq=atomic%20spy%20chadwell), 185). The primary source evidence is potentially consistent with ideological disillusionment serving as one factor motivating Fuchs's decision to stop meeting with his KGB courier in early 1949 (although this might also have been due to Fuchs somehow discovering that he was being investigated, see [Greenspan 2020](https://www.google.co.uk/books/edition/Atomic_Spy/OuO3DwAAQBAJ?hl=en&gbpv=0), 271-284). Remarkably, Fuchs told a similar story of ideological development (but with a different valence) when he met with KGB agents in a Moscow restaurant after his release from British prison and relocation to East Germany. Fuchs told the agents that he had been unduly influenced by bourgeois ideology, but that he had since corrected himself ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 134-135).

3.2 Ted Hall: ideology and great power balancing
------------------------------------------------

Ted Hall was the youngest physicist working on the Manhattan Project. He graduated from Harvard at 18. Hall was a communist, and had been active as a labor organizer while in college ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 110-112). In 1944, at age 19, he approached a representative of the Soviet Union in New York and offered to serve as a spy. His explanation of his motivations for giving the U.S.S.R. information about American nuclear weapons research is recorded in former KGB agent Alexander Vassiliev's notes on the KGB's archives, [which have been translated into English and are hosted on the Wilson Center's website](https://digitalarchive.wilsoncenter.org/collection/86/vassiliev-notebooks):

> The S.U. [Soviet Union] is the only country that could be trusted with such a terrible thing. But since we cannot take it away from other countries—the U.S.S.R. ought to be aware of its existence and stay abreast of the progress of experiments and construction. This way, at a peace conference, the USSR—on which the fate of my generation depends—will not find itself in the position of a power subjected to blackmail ([Vassiliev, Yellow Notebook #1](https://web.archive.org/web/20170811015011/https://digitalarchive.wilsoncenter.org/document/112856), 21).
> > Although Hall would later claim that he had originally set out only to inform the Soviet Union of the fact that the United States was developing nuclear weapons ([Hall [1995] 1997](https://www.google.co.uk/books/edition/Bombshell/xqjeAAAAMAAJ?hl=en&gbpv=0&bsq=bombshell%20ted%20hall), 288), that claim would seem to be belied by his statement that the "U.S.S.R. ought to... stay abreast of the progress of experiments and construction." Decrypted Venona cables revealed Hall’s status as a Soviet spy to American intelligence services after the war. However, Hall, unlike Fuchs, did not confess when questioned. Unwilling to reveal its access to secret Soviet communications, and unable to admit secret evidence in court, the U.S. government let Hall go ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 123-124). After his spying was revealed by the declassification of the Venona cables in 1995, Hall admitted to having been a Soviet spy: > It has even been alleged that I “changed the course of history.” Maybe the “course of history,” if unchanged, would have led to atomic war in the past fifty years—for example the bomb might have been dropped on China in 1949 or the early fifties. Well, if I helped to prevent that, I accept the charge. But such talk is purely hypothetical. Looking at the real world we see that it passed through a very perilous period of imbalance, to reach the existing slightly less perilous phase of “MAD” (mutually assured destruction) ([Hall [1995] 1997](https://www.google.co.uk/books/edition/Bombshell/xqjeAAAAMAAJ?hl=en&gbpv=0&bsq=bombshell%20ted%20hall), 288). > > Hall’s two justifications, more than fifty years apart, both focused on the international balance of power. 3.3 Reflections on nuclear spy motivations ------------------------------------------ Ideology was by far the biggest motivation for Manhattan Project spies. Financial motivations were less important than ideological motivations, probably because penalties for spying could include decades in prison or death. When the stakes are very high, spying requires a certain kind of altruism, as narrowly self-interested motivations are unlikely to be able to overcome fear of the penalties if one is caught. It is also striking how many spies (Klaus Fuchs, David Greenglass, Oscar Seborer) were helped by members of their families in their espionage. Family loyalties might have served to prevent spies from desisting from spying (although Greenglass overcame this obstacle when he testified against his sister and brother-in-law, sending them to the electric chair). Another factor, in addition to family loyalties, that served to make it easier to start spying for the Soviet Union than to stop was the KGB practice of paying spies even if they were originally ideologically motivated. Receiving payment from the KGB removed any possible ambiguity about what the spies were doing and increased expected penalties, reducing the odds that spies would confess. 3.4 Possible AI spy motivations ------------------------------- The Soviet Union was in an unusual position in the 1930s and 1940s. Its governing ideology commanded a significant following among educated people all over the world. This made it much easier to recruit spies. Unlike socialist internationalist loyalty to the Soviet Union, nationalism continues to be widespread and might motivate AI spying. 
This is true even of spying in the private sector, as spies might believe that by helping firms based in their homelands they are doing their patriotic duty. The most significant nuclear spy outside of the Manhattan Project, A. Q. Khan, was motivated by Pakistani nationalism. While security clearance investigations try to detect foreign loyalties, nothing like the security clearance system exists in the private sector. Furthermore, nation-states might force their otherwise unwilling nationals or firms to help with AI espionage. However, this issue must be treated with extreme care. There is an obvious risk of xenophobic or racist bias. Furthermore, there is a risk that attempting to prevent espionage by restricting the access to sensitive information of those with potential conflicts of national loyalties will, pragmatically in addition to morally, backfire. [During the Cold War, the United States deported a Chinese-born aerospace engineer, Qian Xuesen based on unproven allegations that he was a spy](https://www.bbc.com/news/stories-54695598). Qian went on to build missile systems for the People’s Republic of China. In addition to ideas that are widely popular (such as nationalism), ideas that are common among software engineers and computer scientists but rarer in the general population might prove significant as motivations for AI espionage. Belief in the open source or free software ideal, which opposes secrecy in software development, is one obvious example.  Despite the potential motivating force of American nationalism as an ideology for spies, it seems doubtful that the U.S. government or U.S. firms will be net beneficiaries of AI espionage if competition is most intense between countries (if an AI arms race is undertaken largely between U.S. firms, then some U.S. firms may well be net beneficiaries). Spying can help lagging participants in a race to develop new technologies catch up, but it is hard to see how it can help the leader improve its lead (unless the overall leader is behind in certain specific areas). The United States appears to be ahead of the rest of the world in AI, with China being its only plausible close competitor. One recent analysis broke down AI capabilities into four drivers: hardware; research and algorithms; data; and size of commercial AI sector. The United States led China by a wide margin in every category except for data ([Ding 2018](http://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf#page=29), 29). The most important AI research today is conducted in the private sector. Unless that changes, the most important spying will have to be done on private firms. This changes the balance of motivations that might prove significant. Most obviously, given that most people approach their work with the goal of making money, it suggests that financial gain might be more significant as a motive for AI espionage than it was as a motive for nuclear espionage. Financially motivated public sector spies tend to be of lower quality than ideological spies because, given the legal penalties for spying, only irrational people or people in truly desperate need of money would agree to take on the requisite level of risk. But in the private sector, expected penalties are lower.  4. Manhattan Project counterintelligence ======================================== The historian Alex Wellerstein argues that counterintelligence efforts at the Manhattan Project had three main goals: 1. preventing Axis powers from spying 2. 
preventing wartime allies (such as the Soviet Union) from spying and 3. preventing scientists from getting a holistic understanding of the Manhattan Project, and (more importantly) preventing politicians and the broader American public from discovering the Manhattan Project's existence. Broadly, 1 and 3 were successful but 2 was not ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 91-92). It may be that bureaucratic incentives to focus on secrecy from domestic political actors drew energy away from preventing Soviet espionage. General Leslie Groves was particularly concerned about Congress getting wind of the massive budget of the Manhattan Project and cutting it off, or subjecting Manhattan Project leaders to onerous postwar investigations ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 81). During congressional hearings on atomic spying after the war, Groves “argued… that the Manhattan Project security apparatus had been primarily focused on preventing leaks and indiscretions, not rooting out disloyalty” ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 224-225).  ![](https://upload.wikimedia.org/wikipedia/commons/3/37/Leslie_Groves.jpg)[General Leslie Groves](https://en.wikipedia.org/wiki/Leslie_Groves#/media/File:Leslie_Groves.jpg)There were other reasons, besides Groves’s relative lack of interest in preventing Soviet spying, for the success of the Manhattan Project spies. Responsibility for detecting espionage was divided between two mutually hostile agencies, the FBI and army intelligence. And, most fundamentally, a significant portion of the world’s top scientific talent was sympathetic to the Soviet Union, which introduced a capability-alignment tradeoff ([Walsh 2009](https://web.archive.org/web/20210225083023/https://www.smithsonianmag.com/history/george-koval-atomic-spy-unmasked-125046223/)). 5. The significance of nuclear espionage ======================================== The Soviet Union detonated its first nuclear bomb on August 29, 1949, four years after the first successful American nuclear test. In *Stalin and the Bomb*, David Holloway evaluated the impact of nuclear espionage on Soviet nuclear weapons development as follows:  > The first Soviet atomic bomb was a copy of the American plutonium bomb tested at Alamogordo in July 1945. Espionage played a key role in the atomic Soviet project, [*sic*] and its role would have been even greater if the Soviet leaders had paid more heed to the intelligence they received during the war. *The best estimates suggest, however, that the Soviet Union could have built a bomb by 1951 or 1952 even without intelligence about the American bomb*. There already existed in the Soviet Union strong schools of physics and radiochemistry, as well as competent engineers. Soviet nuclear research in 1939-41 had gone a long way toward establishing the conditions for an explosive chain reaction. It was because Soviet nuclear scientists were so advanced that they were able to make good use of the information they received from Britain and the United States about the atomic bomb.... The nuclear project was a considerable achievement for Soviet science and engineering ([Holloway 1994](https://www.google.co.uk/books/edition/Stalin_and_the_Bomb/Yu6ODwAAQBAJ?hl=en&gbpv=0), 366, emphasis added). > > The empirical outline of Holloway’s account does not appear to be open to serious doubt. 
The Soviets made significant use of espionage data and, on the other hand,  Soviet scientists were world-class and could have developed the bomb within a few years of 1949 without espionage. Michael Gordin makes an interesting argument in *Red Cloud at Dawn*. The Soviets laboriously checked, re-checked, and adapted spy data. Given the effort that the Soviets had to go through to assure themselves of the veracity of the information that they got from spies, Gordin suggests that it is an open question whether the Soviets really saved *any* time by using spy data ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 153-154). Gordin concedes however that, even if the Soviets saved no time, they “surely saved much uncertainty” ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 153). Reducing uncertainty can change one’s strategy. If a country increases its confidence that it will soon have a powerful weapon hitherto monopolized by an enemy, it may become rational to behave more aggressively towards that enemy.  Ignoring the prospective effects of knowing (rather than merely guessing) that one will soon have a powerful weapon, saving uncertainty meant removing the chance that the Soviets were unlucky and would have had to wait longer to get nuclear weapons. Stalin himself did not believe that nuclear weapons were very strategically significant in and of themselves ([Gordin 2009](https://www.google.co.uk/books/edition/Red_Cloud_at_Dawn/Ztls8XXNiqgC?hl=en&gbpv=0), 62). He did, however, understand the enormous importance that the Americans assigned to nuclear weapons. Thus, he refused Kim Il Sung’s request to support a North Korean invasion of South Korea in 1948 because he feared an American intervention on the South Korean side. In 1950, however, Stalin was willing to support Kim’s invasion, in part because he believed that the Soviet Union’s nuclear weapons would deter American intervention ([Haynes, Klehr, and Vassiliev 2009](https://www.google.co.uk/books/edition/Spies/qCAVQ_cdomcC?hl=en&gbpv=0), 62). Therefore, it seems that even if one takes maximally unfavorable assumptions and assumes that espionage saved the Soviet Union no time and only uncertainty, without espionage there would have been a substantially greater chance that the Korean War would have been delayed or, because of the other changes made possible by delay, avoided.  Furthermore, I do not think maximally unfavorable assumptions about the efficacy of nuclear espionage are justified. Absent further argument, it seems to me that we should default to the view that it is easier to check data and designs that one has in hand than it is to derive entirely new data and designs. Holloway’s estimate that intelligence saved the Soviets two to three years seems to be a subjective guess rather than the output of a quantitative model of bomb timelines. However, given that Holloway undertook the most thorough study of the Soviet nuclear weapons program (at least in English), he should be afforded some (small) amount of epistemic deference. Given the basic facts of the case, the Soviets saving something in the neighborhood of two to three years is not hard to believe. Because of the importance of the Korean War, that ought to qualify as a significant impact on world history.  In addition to the impact of espionage on the development of nuclear weapons, nuclear espionage may also have raised the temperature of the Cold War. 
Even if we grant, as we should, that the Cold War would have occurred anyway, the discovery of Alan Nunn May's nuclear spying in 1946 may have reduced the odds that control of nuclear weapons would be ceded to multilateral international institutions ([Zaidi and Dafoe 2021](https://web.archive.org/web/20210325035303/https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf), 23, 42, 42n179). The distrust engendered by nuclear espionage highlights the potential of spying to increase enmity between the leader and the laggards in a technological race, and to reduce the odds of cooperation aimed at mitigating the risks of such a race. This effect emerges from the inherent dynamics of espionage and is likely to apply to AI races as well as nuclear races.

6. Secrecy
==========

Among people concerned about existential risk, there sometimes seems to be a presumption in favor of secrecy. One plausible origin for this presumption is the 2016 article ["The Unilateralist's Curse and the Case for a Principle of Conformity"](https://web.archive.org/web/20210413160318/https://nickbostrom.com/papers/unilateralist.pdf) by Nick Bostrom, Thomas Douglas, and Anders Sandberg. Bostrom et al. argue that even a well-intentioned group of independent actors is likely to err in the direction of taking a risky action, because if one can act unilaterally the probability of action will be proportional not to the average of the group but to the probability that the most optimistic actor will act. Bostrom et al.'s proposed solution to the unilateralist's curse is a principle of conformity in situations where unilateralism is possible. When the action in question is publishing or not publishing some information, the principle of conformity is equivalent to a presumption in favor of secrecy. Note, though, that in "The Unilateralist's Curse" Bostrom et al. do not argue for conformity all things considered. Rather, they argue that the unilateralist's curse provides a defeasible reason for conformity. Their paper does not attempt to establish whether, in any given situation, our prior inclinations to conform or not to conform are correct. If one is concerned about the dissemination of information hazards, one should bear in mind that omissions might reveal as much as commissions in certain circumstances, and weigh carefully what strategy of releasing or withholding information is least hazardous ([Bostrom 2019](https://doi.org/10.1111/1758-5899.12718); [Bostrom 2011](https://web.archive.org/web/20210427231638/https://www.nickbostrom.com/information-hazards.pdf)).

One should also be concerned by the tendency of secrecy regimes to perpetuate themselves. Alex Wellerstein explains this point of view:

> This is, perhaps, the real application for the history of nuclear secrecy to these fields: *once the controls come in, they don't go away fast*, and they may not even work well to prevent the proliferation of technology. But they will do other kinds of work in their effort to partition the world into multiple parts: creating in-communities and out-communities, drawing scrutiny to those who practice in these arts, and monopolizing patrons.
> There may be good reasons for other scientific communities to embrace secrecy—if the information in question truly was unlikely to be independently discoverable, had potentially large negative applications relative to the possible positive applications, and could be effectively controlled, then it might be a candidate—but if they took my advice, they would think long and hard about what types of secrecy activities they wanted to adopt and how to make sure that their attempts at secrecy did not outstrip their other values ([Wellerstein 2021](https://www.google.co.uk/books/edition/Restricted_Data/LEU6EAAAQBAJ?hl=en&gbpv=0), 410, emphasis added).

Many of the concerns Wellerstein raises seem rather remote from existential risk. This might lead researchers concerned with existential risk to assume that they have nothing to learn from the anti-secrecy perspective. I think that would be a mistake, because Wellerstein's observation that regimes of secrecy tend to be self-perpetuating is highly relevant to existential risk. Secrecy serves to worsen our understanding of (and, therefore, our ability to control) emerging technologies. Secrecy may have had this effect in the early Cold War United States, where a large thermonuclear arsenal was accumulated alongside a failure to seriously study the catastrophic risks that thermonuclear war posed ([Gentzel 2018](https://theconsequentialist.wordpress.com/2018/02/24/lessons-from-the-cold-war-on-information-hazards-why-internal-communication-is-critical/)). If secrecy is hard to uproot, it might further raise existential risk by preventing concerns about safety from spreading to all relevant actors.

In "What Failure Looks Like," the AI researcher Paul Christiano explains some reasons why AI may pose an existential risk. Those reasons all involve imperfectly understood AI systems whose goals diverge from those of human beings and which are able to gain power and influence in part because of their creators' imperfect understanding of the systems' true goals. Christiano anticipates that this problem will arise due to competitive incentives to deploy powerful AI systems as soon as possible combined with the inherent opacity of contemporary machine learning techniques ([Christiano 2019](https://web.archive.org/web/20190608113631/https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)). But secrecy about advanced AI might compound the problem of recognizing misaligned AI systems. And if approaches to AI safety that rely on rendering AI systems interpretable prove essential to preventing misalignment, secrecy is likely to be a major barrier. Whether such considerations are important enough to establish a presumption against secrecy is beyond the scope of this post. But the empirical tendency of secrecy regimes to expand their remit and endure indefinitely should be taken seriously.

7. Conclusion: espionage and existential risk
=============================================

Espionage is most likely to be significant if discontinuous progress in AI can be achieved on the basis of key abstract insights. To the extent that the scaling hypothesis is true, espionage is likely to be less important. But even if the scaling hypothesis is true, espionage may be significant if it transfers engineering knowledge or tacit knowledge (which can be transferred either by exfiltrating agents or rendering what was tacit explicit).
Espionage during the Manhattan Project may have accelerated Soviet nuclear weapons development by two to three years, which does not sound like much, but may have altered the course of the early Cold War. This was achieved by the [less than 0.1%](https://docs.google.com/document/d/1TFOF3rIMGLBg80Wr8-GWwuFh7ipcjnYXUVLuDPpDC7Y/edit#heading=h.gppzuj2ox6yg) of Manhattan Project employees who were Soviet spies (part of the effectiveness of this small group may have been due to the disproportionate representation of high-ranking employees among spies). If a technology is truly transformative, even a small gain in speed is strategically significant. On balance, AI espionage is likely to increase existential risk. In [“Racing to the Precipice”](https://link.springer.com/article/10.1007%2Fs00146-015-0590-y) Stuart Armstrong, Nick Bostrom, and Carl Shulman create a game theoretic model of AI arms races’ effects on safety. Armstrong et al. find that risks are greatest when enmity between competitors is high, knowledge of other projects is available, and (conditional on knowledge of other projects being available) the leader has only a small lead. One should expect espionage to increase enmity between competitors, increase knowledge of competitors’ projects, and reduce the distance between the leader and the laggards. Thus, to the extent that Armstrong et al.’s model reflects the real strategic situation, the expected impact of espionage is to increase existential risk. Eoghan Stafford, Robert Trager, and Allan Dafoe’s forthcoming “International Strategic Dynamics of Risky Technology Races” builds a more complex model. Like Armstrong et al., Stafford et al. find that enmity increases risk in all situations. However, whereas Armstrong et al. find that a close race is more dangerous, Stafford et al. find that under certain circumstances, close races are less dangerous than very uneven races. If, in Stafford et al.’s model, enmity between leader and laggard is high and the laggard is far behind, compromising on safety might seem to be the only way that the laggard can have a chance of winning. But in a more even race, the laggard might be less willing to compromise on safety because they would have a chance of winning without taking extreme risks. Thus, granting for the sake of the argument that the assumptions of Stafford et al.’s model hold, espionage’s tendency to narrow gaps might, under some circumstances, reduce existential risk. However, this consideration would seem to me to be outweighed by espionage’s tendency to increase enmity.  It therefore may be valuable for people concerned about existential risk to contribute to preventing AI espionage even if they have no preference between the project being spied on and the project doing the spying. On the other hand, secrecy (the most obvious countermeasure to espionage) may increase existential risk by worsening issues with interpretability. And subjecting AI researchers to background checks may asymmetrically weaken conscientious projects as their competitors, not worried about existential risk or espionage, will gain from the talent that they reject. All of these considerations should be carefully weighed by AI policy practitioners before deciding to prioritize or deprioritize preventing espionage.
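To see how these ingredients — enmity, information about competitors, and the size of the lead — can interact, here is a deliberately crude numerical sketch. The functional forms are invented for illustration and are not Armstrong et al.'s or Stafford et al.'s actual models.

```python
# Toy illustration (not the published models): a laggard chooses how much
# safety to cut in order to close a capability gap; higher enmity makes
# losing look worse, so more safety gets traded away.
def laggard_safety(gap: float, enmity: float) -> float:
    """Fraction of safety effort the laggard keeps, in [0, 1]."""
    sacrifice = min(1.0, gap * (0.5 + enmity))  # made-up functional form
    return 1.0 - sacrifice

def disaster_risk(gap: float, enmity: float, leader_safety: float = 0.9) -> float:
    """Chance that whichever project 'wins' deploys an unsafe system."""
    p_laggard_wins = max(0.0, 0.5 - gap / 2)   # a big gap makes upsets rare
    risk_if_leader = 1.0 - leader_safety
    risk_if_laggard = 1.0 - laggard_safety(gap, enmity)
    return (1 - p_laggard_wins) * risk_if_leader + p_laggard_wins * risk_if_laggard

for enmity in (0.1, 0.9):
    for gap in (0.1, 0.6):
        print(f"enmity={enmity}, gap={gap}: risk={disaster_risk(gap, enmity):.2f}")
```

Under these made-up assumptions both higher enmity and a wider gap raise risk, echoing the enmity finding shared by both models and Stafford et al.'s point about uneven races; with different assumed forms the gap effect can reverse, which is exactly why the two models disagree on that dimension.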
c357c0f6-caf8-485d-afa2-e1402530499c
trentmkelly/LessWrong-43k
LessWrong
Door Ideas The front door to our apartment doesn't really have space to swing. When we first moved in it opened to the right, blocking part of the hallway and keeping you from carrying groceries straight through to the kitchen: This was annoying, so after some discussion with housemates I turned it around: In practice this is maybe better, maybe worse, still pretty annoying. We've considered removing the door: But this would let kid-noise carry more down to Julia's office, wouldn't work if we had a housemate with a small child or cat, and if at some point the people downstairs were not close friends it would be a little weird. We could have the door open out instead: But this would be too awkward to open, since the landing is so small that the door kind of needs to swing through you. So, it currently still looks like: One idea is I could cut it down the middle, and let it fold a second time: This gets it out of the way when it's open, but at the cost of having a really very large swing. Perhaps the door should open in the middle? Very narrow for a double door, and it still sticks out a bit on the left. There's extra space on the right, though, so you could make it asymmetrical? Or, even one door going in and one going out? This would be a bit more work, but we could put a track on the ceiling that allowed the door to slide around a curve, opening outwards but with a much reduced sweep: Predictions on what we'd end up liking best?
789ebb73-43c2-48dd-b02a-a3b7cc1f451a
trentmkelly/LessWrong-43k
LessWrong
Cheap Whiteboards! Take a large thick cardboard box. Cut it into planar pieces. Take transparent tape and cover the cardboard's surface with it. Tadaa! A whiteboard: Because transparent tape is smooth the whiteboard pen is erasable. Sticky notes stick a lot better on the transparent tape than on the cardboard, which is useful for Kanban Boards! Ideally, put white paper between the cardboard box and the tape to get an actual white board. This method can be applied to convert any object into a whiteboard, like a big piece of wood, the side of a shelf, your forehead, etc. (Originally my intention was to make sticky notes stick better, and accidentally discovered that I made a whiteboard. That's why I did not use white paper.) (I'm currently unsure if there is a problem with the pens becoming unerasable when leaving them on for a long period of time, or any other problems, as I just came up with this.)
48ae522c-672f-4e3d-8b60-259046ce62aa
StampyAI/alignment-research-dataset/blogs
Blogs
Acme: A new framework for distributed reinforcement learning Overall, the high-level goals of Acme are as follows: 1. To enable the reproducibility of our methods and results  — this will help clarify what makes an RL problem hard or easy, something that is seldom apparent. 2. To simplify the way we (and the community at large) design new algorithms — we want that next RL agent to be easier for everyone to write! 3. To enhance the readability of RL agents — there should be no hidden surprises when transitioning from a paper to code. In order to enable these goals, the design of Acme also bridges the gap between large-, medium-, and small-scale experiments. We have done so by carefully thinking about the design of agents at many different scales. At the highest level, we can think of Acme as a classical RL interface (found in any introductory RL text) which connects an actor (i.e. an action-selecting agent) to an environment. This actor is a simple interface which has methods for selecting actions, making observations, and updating itself. Internally, learning agents further split the problem up into an “acting” and a “learning from data” component. Superficially, this allows us to re-use the acting portions across many different agents. However, more importantly this provides a crucial boundary upon which to split and parallelize the learning process. We can even scale down from here and seamlessly attack the batch RL setting where there exists *no environment* and only a fixed dataset. Illustrations of these different levels of complexity are shown below: ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c6b6b755e529e96ea6c6_Fig%201.gif)This design allows us to easily create, test, and debug novel agents in small-scale scenarios before scaling them up — all while using the same acting and learning code. Acme also provides a number of useful utilities from checkpointing, to snapshotting, to low-level computational helpers. These tools are often the unsung heroes of any RL algorithm, and in Acme we strive to keep them as simple and understandable as possible. To enable this design Acme also makes use of [Reverb](https://deepmind.com/research/open-source/Reverb): a novel, efficient data storage system purpose built for machine learning (and reinforcement learning) data. Reverb is primarily used as a system for experience replay in distributed reinforcement learning algorithms, but it also supports other data structure representations such as FIFO and priority queues. This allows us to use it seamlessly for on- and off-policy algorithms. Acme and Reverb were designed from the beginning to play nicely with one another, but Reverb is also fully usable on its own, so go check it out! Along with our infrastructure, we are also releasing single-process instantiations of a number of agents we have built using Acme. These run the gamut from continuous control (D4PG, MPO, etc.), discrete Q-learning (DQN and R2D2), and more. With a minimal number of changes — by splitting across the acting/learning boundary — we can run these same agents in a distributed manner. Our first release focuses on single-process agents as these are the ones mostly used by students and research practitioners. 
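As a schematic sketch of the actor interface described above (not the exact Acme API — the class and method names here are illustrative, and a dm_env-style `TimeStep` with `last()`, `observation`, and `reward` fields is assumed), the acting side reduces to three methods that an environment loop calls in sequence:

```python
import abc

class Actor(abc.ABC):
    """Schematic action-selecting interface: act, observe, update."""

    @abc.abstractmethod
    def select_action(self, observation):
        """Return an action for the current observation."""

    @abc.abstractmethod
    def observe(self, action, next_timestep):
        """Record the transition (e.g. write it to a replay buffer)."""

    @abc.abstractmethod
    def update(self):
        """Pull the latest learner weights / take a learning step."""

def run_episode(environment, actor):
    """Minimal environment loop over a dm_env-style environment (assumed)."""
    timestep = environment.reset()
    episode_return = 0.0
    while not timestep.last():
        action = actor.select_action(timestep.observation)
        timestep = environment.step(action)
        actor.observe(action, timestep)
        actor.update()
        episode_return += timestep.reward or 0.0
    return episode_return
```

Keeping `select_action`/`observe` separate from the learning step is what allows the same acting code to back both a small single-process experiment and a distributed one, with the learner and the replay system (e.g. Reverb) moved behind the `update` boundary.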
We have also carefully benchmarked these agents on a number of environments, namely the [control suite](https://deepmind.com/research/publications/deepmind-control-suite), [Atari](https://github.com/mgbellemare/Arcade-Learning-Environment), and [bsuite](https://deepmind.com/research/open-source/bsuite). ###### Playlist of videos showing agents trained using Acme framework While additional results are readily available in our [paper](https://arxiv.org/abs/2006.00979), we show a few plots comparing the performance of a single agent (D4PG) when measured against both actor steps and wall clock time for a continuous control task. Due to the way in which we limit the rate at which data is inserted into replay — refer to the paper for a more in-depth discussion — we can see roughly the same performance when comparing the rewards an agent receives versus the number of interactions it has taken with the environment (actor steps). However, as the agent is further parallelised we see gains in terms of how fast the agent is able to learn. On relatively small domains, where the observations are constrained to small feature spaces, even a modest increase in this parallelisation (4 actors) results in an agent that takes under half the time to learn an optimal policy: ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c7165eaff527cb3d52f4_Fig%202.jpg)But for even more complex domains where the observations are images that are comparatively costly to generate we see much more extensive gains: ![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6228c723cd9c1976c2f08330_Fig%203.jpg)And the gains can be even bigger still for domains such as Atari games where the data is more expensive to collect and the learning processes generally take longer. However, it is important to note that these results share the same acting and learning code between both the distributed and non-distributed setting. So it is perfectly feasible to experiment with these agents and results at a smaller scale — in fact this is something we do all the time when developing novel agents! ‍ For a more detailed description of this design, along with further results for our baseline agents, see our [paper](https://arxiv.org/abs/2006.00979). Or better yet, take a look at our [GitHub repository](https://github.com/deepmind/acme) to see how you can start using Acme to simplify your own agents!
f42c2f32-b73f-4749-b570-158011d90ea7
trentmkelly/LessWrong-43k
LessWrong
Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? Months and months ago, when COVID-19 first broke, one of the most concerning aspects of the disease was the possibility that it might produce long-term chronic fatigue, based on a comparison with SARS (apparently a large percentage of the people who had SARS in 2003, had chronic fatigue symptoms years later, though I can't find the paper right now), plus some other evidence. At the time, we didn't have much data, but now we're a few months into the pandemic. Obviously, we won't know how long lasting it is, but what are the updated risk estimates of chronic fatigue from COVID?
5ec80b16-55d4-40f0-ba4f-2c8b78ca2694
StampyAI/alignment-research-dataset/arxiv
Arxiv
Open-Ended Learning Leads to Generally Capable Agents

1 Introduction
---------------

\*Authors ordered alphabetically by first name. More details in [Author Contributions](#Sx1). Correspondence to jaderberg@deepmind.com.

Over recent years, deep reinforcement learning (deep RL) has repeatedly yielded highly performant artificial agents across a range of training domains (Silver et al., [2017](#bib.bib92); OpenAI et al., [2019](#bib.bib68); Mirhoseini et al., [2021](#bib.bib62)). The marriage of expressive neural network architectures, together with scalable and general reinforcement learning algorithms to train these networks, has resulted in agents that can outperform humans on the complex simulated games they were trained on (Mnih et al., [2015](#bib.bib63)). In addition, through *multi-agent* deep RL, agents have also demonstrated impressive robustness to held-out opponents – opponents that were never encountered during training (Jaderberg et al., [2019](#bib.bib44)). Some of the most salient examples include robustness to the top human professional players (Silver et al., [2016](#bib.bib91); Vinyals et al., [2019](#bib.bib107); Berner et al., [2019](#bib.bib10)).

However, these agents are often constrained to play only the games they were trained for – whilst the exact instantiation of the game may vary (*e.g.* the layout, initial conditions, opponents) the goals the agents must satisfy remain the same between training and testing. Deviation from this can lead to catastrophic failure of the agent.

In this work we move towards creating an artificial agent whose behaviour generalises beyond the set of games it was trained on, an agent which is robust and generally capable across a vast evaluation space of games. By training an agent effectively across a massively multi-task continuum we obtain a neural network policy that exhibits general heuristic behaviours, allowing it to score reward in all humanly solvable tasks in our held-out evaluation task set. In addition, we see the agent being capable in tasks that not only are explicitly held-out from training, but lie far outside of its training distributions, including versions of hide and seek (Baker et al., [2020](#bib.bib5)) and capture the flag (Jaderberg et al., [2019](#bib.bib44)).

To produce a vast and diverse continuum of training and evaluation tasks we develop an environment space, dubbed *XLand*, that permits procedural generation of rich 3D worlds and multiplayer games (described by the goals of the players). These span both two- and three-player tasks, highly competitive and completely cooperative as well as mixtures of both, balanced and imbalanced games, and strategically deep games (*e.g.* Capture the Flag or XRPS, see Section [3.2.3](#S3.SS2.SSS3)). The capabilities asked of players include visual scene understanding, navigation, physical manipulation, memory, logical reasoning, and theory of mind.

To train agents in this environment space, we first define a multi-dimensional measure of performance, *normalised score percentiles*, which characterises agent performance and robustness across the evaluation task space. We create an open-ended training process to iteratively improve the spectrum of normalised score percentiles.
The training process uses deep RL at its core with an attention-based neural network architecture allowing implicit modelling of goals of the game which are provided to the agent. The training tasks consumed by the agent are dynamically generated in response to the agent’s performance, with the generating function constantly changing to keep a population of agents improving across all percentiles of normalised score. This population training is repeated multiple times sequentially, each generation of agents bootstrapping their performance from previous generations with policy distillation, each generation of agents contributing new policies to train against in this multiplayer environment, and each generation redefining the normalised score percentiles as the frontier of performance across task space is advanced. From experimental results we demonstrate the clear benefit of each component of this learning process, with the dynamic task generation being particularly important for learning compared to uniform sampling from task space. ![Refer to caption](/html/2107.12808/assets/figures/fig1_v4_paper.png) Figure 2: Visualisation of the XLand environment space. (Left) Each dot corresponds to a single *game* and is positioned by a 2D UMAP embedding of distance between games, with the size of the dot corresponding to the *balance* of the game, and the colour representing competitiveness of the game (from blue – completely competitive, to purple – completely cooperative). (Right) Each game can be played on a myriad of *worlds*, which we can smoothly mutate to traverse a diverse set of physical challenges. (Bottom) An XLand *task* consists of combining a game with a world and co-players. The result of this training process is an agent that is generally capable across the held-out evaluation space. Qualitatively, we observe the agent exhibiting behaviours that are generally applicable, rather than optimal for any specific task. Examples of such behaviours include: experimentation through directed exploration until the agent recognises a rewarding state has been achieved; seeking another player out to gather information of its state irrespective of its goal; and tagging another player if it is holding an object that is related to the agent’s goal irrespective of that player’s intention. We also probe quantitatively the behaviour of agents in test-time multi-agent situations and see evidence of cooperation emerging with training. In addition to the agent exhibiting zero-shot capabilities across a wide evaluation space, we show that finetuning on a new task for just 100 million steps (around 30 minutes of compute in our setup) can lead to drastic increases in performance relative to zero-shot, and relative to training from scratch which often fails completely. The paper is organised as follows: first we introduce the XLand environment space in Section [2](#S2 "2 XLand Environment Space ‣ Open-Ended Learning Leads to Generally Capable Agents") followed by an exploration of the quantitative properties of this environment space in Section [3](#S3 "3 Environment Properties ‣ Open-Ended Learning Leads to Generally Capable Agents"). In Section [4](#S4 "4 Goal and Metric ‣ Open-Ended Learning Leads to Generally Capable Agents") we introduce the goal, metric, and evaluation space we use to measure progress in the open-ended environment. 
In Section [5](#S5) we detail the different components of our learning system and how these work together. Section [6](#S6) describes the experimental results, dynamics, and analysis of the produced agent. Finally, Section [7](#S7) gives an overview of some related works, followed by the conclusions of this work in Section [8](#S8). All proofs and experimental details can be found in the Appendices.

2 XLand Environment Space
--------------------------

To promote the emergence of general behaviour in reinforcement learning, we seek an environment that exhibits dimensions of consistency across tasks, as well as dimensions of smooth variation. The development of an environment exhibiting smooth vastness with consistency is central to this work, and as such, we introduce the *XLand* environment space. XLand is a 3D environment consisting of static topology together with dynamic objects simulated by rigid-body physics, with multiple players (controllable by both humans and agents) perceiving first person observations and egocentric movement akin to DM-Lab (Beattie et al., [2016](#bib.bib9)) and Quake III: Arena (Jaderberg et al., [2019](#bib.bib44)). Players have different world-affecting gadgets at their disposal, are able to carry and hold dynamic objects, and receive reward at each timestep based on the state of the environment: relationships between players, objects, and topology. The environment is developed and simulated using the Unity framework from Ward et al. ([2020](#bib.bib111)), with an example task seen in Figure [1](#S0.F1).

Consistency in this environment comes from: players always having the same control interface, observation specification, gadget dynamics, and movement dynamics; objects being simulated with similar physical properties; and a limited set of topological building blocks. However, the remainder of the environment properties are vastly but also smoothly variable: the layout and structure of topological building blocks, the positions of objects, the lighting, and crucially the specification of rewarding states for each player. Finally, from a single player’s perspective, the policies of the co-players can be vastly but smoothly variable.

The XLand task space, from the perspective of the target player (*e.g.* an agent), denoted as $\aleph$, is a Cartesian product of all possible worlds $\mathbf{w} \in \mathcal{W}$, games $\mathbf{G} \in \mathfrak{G}$ (defined as one goal $\mathbf{g}_i \in \mathcal{G}$ for each of the $n$ players), and the policies $\pi_i \in \Pi$ of each of the remaining $n-1$ players (the players of the game not including the target player).
Formally,

$$\aleph := \mathcal{W} \times \bigcup_{n=1}^{\infty}\left[\mathcal{G}^{n} \times \Pi^{n-1}\right].$$

Under this definition, each XLand task

$$\mathbf{x} = (\mathbf{w}, (\mathbf{g}_1, \dots, \mathbf{g}_n), (\pi_2, \dots, \pi_n)) \in \aleph$$

can be seen as a regular POMDP over a simulation state space $\mathcal{S}$. For notational simplicity we often refer to the policy of the target player as either $\pi$ or $\pi_1$. At each timestep $t$, each player $\pi_i$ receives its player-centric observations $\mathbf{o}_t^i := (f_i(\mathbf{s}_t), \mathbf{g}_i)$, where $f_i$ extracts a pixel-based render of the state of the world from the perspective of player $i$ and also provides the proprioception readings (*e.g.* whether a player is holding something). Note, the reward from the environment is not included in player observations. Based on these observations, an action $\mathbf{a}_t^i$ of each player is sampled from its corresponding policy, $\mathbf{a}_t^i \sim \pi_i(\mathbf{h}_t^i)$, where $\mathbf{h}_t^i = (\mathbf{o}_1^i, \dots, \mathbf{o}_t^i)$ is the sequence of observations perceived so far.
The initial state of the simulation is uniquely identified by $\mathbf{w}$. The simulation is terminated after a fixed number of $T = 900$ iterations (two minutes when simulated in real-time). The transition function comes from the simulation’s physics engine, which calculates the new state $\mathbf{s}_{t+1}$ from the current state $\mathbf{s}_t$ given the simultaneous actions of all the players involved in a specific task, $(\mathbf{a}_t^i)_{i=1}^{n}$, analogously to other multi-agent real-time environments (Vinyals et al., [2019](#bib.bib107); Berner et al., [2019](#bib.bib10); Jaderberg et al., [2019](#bib.bib44)). From the perspective of a single player (such as a learning agent), the actions of all the co-players can be seen as part of the transition function, and thus the whole process relies only on $\mathbf{a}_t^1$, the action of the target player. The reward function $r_t : \mathcal{S} \rightarrow \{0,1\}$ returns 1 if and only if the player’s goal is satisfied in the current simulation state. Consequently, on a given task, a player’s goal is to maximise the expected future discounted number of timesteps in which its goal is satisfied (for notational simplicity we omit the dependence of all returns/values on the discount factor $\gamma$):

$$\mathbf{V}_{\pi}(\mathbf{x}) := \mathbb{E}\left[R_{\pi}(\mathbf{x})\right] = \mathbb{E}\left[\sum_{t=1}^{T}\gamma^{t} r_t\right].$$

We will now describe in more detail the makeup of the XLand environment, separating out the initial conditions of the physical environment space, *worlds*, from the specification of rewarding states for each player, *games*. We will highlight the vastness and smoothness of these components of XLand, and finally how these components combine and interact to form a vast and complex space of *tasks*, Figure [2](#S1.F2).

### 2.1 World Space

Tasks in XLand are embedded within 3D physically simulated worlds, an example of which is shown in Figure [1](#S0.F1).
The layout of the topology, the initial locations of the objects, the initial locations of the players, and the gadgets at each player’s disposal are central to the behaviour being asked of a capable player in this task. For example, consider the simple game consisting of a single player, which receives reward when the player is near a purple sphere. If the player is initially located next to the purple sphere, the player needs to simply stand still. If the purple sphere is initially located out of sight of the player, the player must search for the object. The topology could provide navigational challenges to this search, requiring analysis of connected paths and memory to quickly find the object. The physical interaction between the initial location of the sphere and the topology or other objects could cause the sphere to roll, requiring the player to intercept the sphere once it is found, and if the player has a freeze gadget this would allow the player to stop the sphere rolling by freezing its motion.

The initial condition of a simulated world defines the possible challenges faced by a player somewhat independently of the game, the goals of the players. As such, we define the world $\mathbf{w}$ as the initial state of the simulated 3D world and its constituents, the state at the beginning of each episode of play. The three main components of a world are the topology, objects, and players. Worlds are procedurally generated (Shaker et al., [2016](#bib.bib89)).

##### Topology

A world in XLand contains a static topology which defines the unmovable landscape that is navigated by the players, surrounded by four walls which enclose the rectangular playable area, with variable lighting conditions. The topology is generated by first selecting a rectangular size of the world which encloses a grid, and subsequently placing a number of predefined 3D topological tiles. These tiles can be placed in any arrangement but cannot violate local neighbourhood connectivity constraints, ensuring that the arrangement of 3D tiles forms congruent and connected playable regions.

##### Objects

Objects are elements of XLand worlds that are dynamic – they undergo physics simulation and can be manipulated by players. Each world defines a specified initial location for each movable object as well as its orientation, shape, colour and size. Object instances vary in size, colour, and shape. There are three colours – black, purple, yellow – and four shapes – cube, sphere, pyramid, slab.

##### Players

The players of the game, which can be controlled by agents, are given initial positions in the same manner as objects. Players are coloured, and in this work we consider up to three players, each being assigned a unique colour of either blue, red, or green. In addition, each player is assigned a gadget: either the freeze gadget or the tagging gadget. The freeze gadget can be used by a player only on an object and has the effect of freezing the dynamics of the object so that it remains static and unmovable for 5 seconds, before becoming dynamic again and undergoing physics simulation as normal. The tagging gadget can be used by a player on an object or another player and has the effect of removing the object or player from the world for 3 seconds, before the object or player is returned to the world at its initial location, rather than the location at which it was removed.
An instance of a world $\mathbf{w}$ is therefore a particular topology, combined with a set of objects with locations, and a particular set of players with locations and gadgets. An agent playing in a world $\mathbf{w}$ will always experience identical initial conditions. Our process of generating worlds leads to a vast and smooth space of worlds, with these properties explored further in Section [3.1](#S3.SS1). More details of this process can be found in Section [A.1.1](#A1.SS1.SSS1) and Figure [31](#A1.F31).

### 2.2 Game Space

Whilst a world defines the initial state of the simulated physical space for the players to act in, a task requires a *game* for these players to act towards. A game $\mathbf{G}$ consists of a goal $\mathbf{g}_i \in \mathcal{G}$ for each of the $n$ players, $\mathbf{G} = (\mathbf{g}_1, \ldots, \mathbf{g}_n)$. A goal defines the reward function for the associated player, and each player is tasked with acting in a way to maximise their total reward, while perceiving only their own goal (and not seeing the goals of the co-players).

The state of our simulated environment $\mathbf{s} \in \mathcal{S}$ describes the physical world the players interact with: $\mathbf{s}$ consists of the positions of all the objects and players, their orientations, velocities, *etc*. We define a set of atomic predicates $\phi_j : \mathcal{S} \rightarrow \{0,1\}$ in the form of a physical relation applied to some of the entities present in the state. These relations include: being near, on, seeing, and holding, as well as their negations, with the entities being objects, players, and floors of the topology. An example predicate could be near(purple sphere, opponent), which returns 1 if and only if one of the co-players is currently close to a purple sphere. With the set of possible predicates fixed, a goal of a player can be represented by a set of options (disjunctions) over sets of necessary predicates for each option (conjunctions).
Consequently, an example goal could look like

$$\mathbf{g} = \underbrace{(\phi_{j_1} \wedge \phi_{j_2})}_{\text{option 1}} \vee \underbrace{(\phi_{j_2} \wedge \phi_{j_3} \wedge \phi_{j_4})}_{\text{option 2}}$$

which, for some example predicates, could mean *“Hold a purple sphere ($\phi_{j_1}$) while being near a yellow sphere ($\phi_{j_2}$), or be near a yellow sphere ($\phi_{j_2}$) while seeing an opponent ($\phi_{j_3}$) who is not holding the yellow sphere ($\phi_{j_4}$)”*. This is a canonical representation of Boolean formulas, the disjunctive normal form (DNF), which can express any Boolean formula (Davey and Priestley, [2002](#bib.bib21)).
The corresponding reward function $r_{\mathbf{g}}(\mathbf{s})$ follows the transformation of disjunctions becoming sums and conjunctions becoming products, *i.e.* for a goal $\mathbf{g} := \bigvee_{i=1}^{k}\left[\bigwedge_{j=1}^{n_i}\phi_{ij}\right]$:

$$r_{\mathbf{g}}(\mathbf{s}) = \max_{i=1}^{k}\left[\min_{j=1}^{n_i}\phi_{ij}(\mathbf{s})\right] = \min\left(\sum_{i=1}^{k}\prod_{j=1}^{n_i}\phi_{ij}(\mathbf{s}),\ 1\right).$$

A simple example in our game space would be the game of hide and seek. The two-player version of the game consists of two goals $(\mathbf{g}_{\text{seek}}, \mathbf{g}_{\text{hide}})$, where the goal of one player consists of just one option of one predicate, $\mathbf{g}_{\text{seek}} = \phi_{\text{seek}} = \texttt{see(me, opponent)}$, and the goal of the co-player is $\mathbf{g}_{\text{hide}} = \phi_{\text{hide}} = \texttt{not(see(opponent, me))}$. This general construction of games allows us to represent a vast number of highly diverse games, ranging from simple games of finding an object to complex, strategically deep games. Importantly, the space of games is also smooth, allowing for gradual transition between games. These properties are explored in Section [3.2](#S3.SS2).
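To make the mapping from a DNF goal to its reward concrete, here is a minimal Python sketch (ours, not the paper's code); the predicate names, the dictionary-of-facts state, and the helper functions are illustrative assumptions rather than XLand's actual interfaces.

```python
from typing import Callable, Dict, List

# A predicate maps a simulation state to 0 or 1. Here the "state" is just a
# dictionary of boolean facts; in XLand it would be the full physics state.
Predicate = Callable[[Dict[str, bool]], int]

def make_predicate(name: str) -> Predicate:
    """Look up an atomic relation (e.g. 'near(me, purple sphere)') in the state."""
    return lambda state: int(state.get(name, False))

def dnf_reward(goal: List[List[Predicate]], state: Dict[str, bool]) -> int:
    """r_g(s) = max over options of (min over that option's predicates).

    The player is rewarded iff at least one option has all predicates satisfied.
    """
    return max(min(p(state) for p in option) for option in goal)

# Example goal: "hold a purple sphere and be near a yellow sphere" OR
# "be near a yellow sphere and see an opponent who is not holding it".
goal = [
    [make_predicate("hold(me, purple sphere)"),
     make_predicate("near(me, yellow sphere)")],
    [make_predicate("near(me, yellow sphere)"),
     make_predicate("see(me, opponent)"),
     make_predicate("not(hold(opponent, yellow sphere))")],
]

state = {
    "near(me, yellow sphere)": True,
    "see(me, opponent)": True,
    "not(hold(opponent, yellow sphere))": True,
}
print(dnf_reward(goal, state))  # -> 1 (the second option is satisfied)
```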
### 2.3 Task Space

A task in XLand $\mathbf{x}$ is the combination of a world $\mathbf{w}$, a game $\mathbf{G}$, and the policies of the *co-players* $(\pi_2, \dots, \pi_n)$. With this view, despite its clearly multi-agent nature, we can view each task as a standard single-player problem for $\pi_1$. The combination of a world, a game, and co-players can interact in complex ways to shape the space of optimal behaviours required of the player. Consider the example game where the player has a goal consisting of two options, *“Hold a purple sphere or hold a yellow sphere”*, and there is one co-player with the identical goal. If the game is played in a fully open world where initially both rewarding objects are visible, the challenge to obtain the optimal behaviour is to choose to navigate to the closest object. If the paths to each object are occluded along the route, the optimal behaviour might require memory to reach its goal object, remembering the path to take. If the world is such that only one of the objects is initially visible but out of reach on a higher floor, the optimal behaviour may be to manipulate another object to reach the goal object.

Now consider the variation of co-player policies. If the co-player picks up the purple sphere and moves away quickly, the optimal behaviour of the player may be to ignore the purple sphere and navigate to hold the yellow sphere. However, if the co-player seeks out the player and uses its tagging gadget on sight, hindering the player’s ability to navigate to the goal object, the optimal behaviour of the player may be to avoid being seen or to tag the co-player itself, before navigating to a goal object. A result of this complex interaction is that the cross product of a set of worlds, games, and co-player policies creates a set of tasks with challenges – optimal behaviours of the player – which is larger than the sum of the number of worlds, games, and co-player policies.

3 Environment Properties
-------------------------

The previous section introduced the XLand environment space and its tasks’ construction from worlds, games, and co-players. In this section we analytically and empirically explore some of the properties of this space, focusing on world and game properties independently. In both cases we explain how these components give rise to the properties of vastness, diversity, and smoothness.

### 3.1 World Properties

The worlds are high dimensional objects consisting of topology, object locations, and player locations. To highlight the characteristics of worlds, we can describe a world in terms of the navigational challenges it poses due to the topology and the objects. Our worlds are all grid-aligned, with varied dimensions of each single *tile*, some of which (ramps) one can use to navigate to a higher level. We consider two world representations: first, the height map $\tau(\mathbf{w}) : \mathcal{W} \rightarrow [0,1]^{w \times h}$, where $w, h$ are the width and height of the world respectively, and each element in $\tau(\mathbf{w})$ is the height of the top of the tile at the location of that element.
The second representation is a world topology graph, representing navigation paths.

###### Definition 1 (World topology graph).

For a given world $\mathbf{w}$, we define a directed graph $G_{\mathbf{w}} = (V_{\mathbf{w}}, E_{\mathbf{w}})$ where each tile of a world is represented as a vertex, and an edge exists between two vertices $v_i$ and $v_j$ if and only if it is possible for a player to travel between the two neighbouring tiles in a straight line (they are on the same level, the height of $v_j$ is lower so the agent can fall to it, or $v_j$ is an accessible ramp leading to a higher level).

Given this graph, we can define various proxy measures of navigational complexity by looking at the distribution of paths between every pair of vertices.

###### Definition 2 (Shortest paths distribution).

For a given $\mathbf{w}$ we define $\rho_{\mathrm{sp}}(\mathbf{w})$ as a distribution of lengths of shortest paths between every pair of vertices in $G_{\mathbf{w}}$.

###### Definition 3 (Resistance distances distribution).

For a given $\mathbf{w}$ we define $\rho(\mathbf{w})$ as a distribution of resistance distances (Klein and Randić, [1993](#bib.bib51)) between every pair of vertices in $G_{\mathbf{w}}$, where a resistance distance between $v_i$ and $v_j$ is given by $\Gamma_{ii} + \Gamma_{jj} - \Gamma_{ij} - \Gamma_{ji}$ for $\Gamma = \left(L + \tfrac{1}{w \cdot h}\mathbf{1}_{w \cdot h \times w \cdot h}\right)^{\dagger}$, $L$ being the Laplacian matrix of $G_{\mathbf{w}}$ and $\dagger$ being the Moore-Penrose pseudoinverse (Penrose, [1955](#bib.bib70)).
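As a rough numerical illustration of Definition 3 (a sketch under simplifying assumptions, not the paper's implementation), the snippet below computes pairwise resistance distances from the Laplacian pseudoinverse for a tiny hand-made navigation graph; in particular it treats the graph as undirected, whereas the world topology graph above is directed.

```python
import numpy as np

def resistance_distances(adjacency: np.ndarray) -> np.ndarray:
    """Pairwise resistance distances from the Laplacian pseudoinverse.

    Gamma = pinv(L + (1/n) * ones), and
    dist(i, j) = Gamma[i, i] + Gamma[j, j] - Gamma[i, j] - Gamma[j, i].
    """
    n = adjacency.shape[0]
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    gamma = np.linalg.pinv(laplacian + np.ones((n, n)) / n)
    diag = np.diag(gamma)
    return diag[:, None] + diag[None, :] - gamma - gamma.T

# Toy 4-tile world: a path 0-1-2-3 (symmetric adjacency, invented for illustration).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
dists = resistance_distances(adj)
print(np.round(dists[0], 2))  # resistance distances from tile 0: [0. 1. 2. 3.]
```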
#### 3.1.1 World Vastness

Let us start by discussing the vastness of worlds by looking at how many topographies are possible in XLand. In principle, every combination of tiles could be utilised, creating $N_{\mathrm{floors}}^{w \cdot h} \cdot N_{\mathrm{tiles}}$ possibilities. However, as discussed previously, constraints on tile placements exist to ensure ramps connect levels and there are accessible playable regions. Consequently, it is reasonable to count the number of world topologies where all ramps are properly connected, and that have at least 50% of the world fully accessible (there exists a way to go from every point to any other point within the accessible area). We estimate a lower bound to this quantity with Monte Carlo sampling, and present results in Figure [3](#S3.F3) (see Section [A.1.2](#A1.SS1.SSS2) for details). For worlds of size 9 by 9 tiles, (9,9), we have more than $10^{16}$ unique topologies (corrected for 8 possible symmetries) – a vast space of worlds.

Figure 3: Visualisation of the bounds of the number of possible world topologies of shape $(n,n)$ as a function of a world size $n$. See Section [A.1.2](#A1.SS1.SSS2) for details.

#### 3.1.2 World Smoothness

Figure 4: An empirical visualisation of the world space smoothness. We take a set of worlds of size (9,9) and then apply local mutations up to 30 times. (Top) Each dot on the plot represents one pair of mutated worlds, with the x-axis showing the $L_2$ distance in tile space, and the y-axis showing the Cauchy-Schwarz Divergence between distributions of reachability graph resistances $\rho(\mathbf{w})$ (left) and shortest path distances $\rho_{\mathrm{sp}}(\mathbf{w})$ (right). The pink line represents the empirical smoothness coefficient. The colour of each dot encodes the number of mutations between the pair of worlds, from 1 (blue) to 30 (red). (Bottom) We linearly embed each of the worlds, trying to find a linear projection where the entropy of the corresponding distribution (in colour) can be well described by a distance from the center of the projection. One can see how small changes in the world space (position) lead to small deviations of the entropy (colour).

Figure 5: An example array of worlds from the XLand environment space.

We hypothesise that small changes in the world topography lead to small changes in the overall navigational complexity.
To formalise this claim we take a set of 9 by 9 worlds, and then apply local changes to each of them, in the form of moving tiles around, changing floors, *etc*. Given this set of mutated worlds, we plot the relation between the change in the topography

$$\|\mathbf{w} - \mathbf{w}'\|_{\mathcal{W}} := \sqrt{\sum_{i,j=1}^{w,h}\left[\tau(\mathbf{w})_{ij} - \tau(\mathbf{w}')_{ij}\right]^2}$$

and the Cauchy-Schwarz Divergence (Nielsen, [2012](#bib.bib67))

$$\mathrm{D}_{\mathrm{CS}}(p,q) := -\mathrm{H}_2(p) - \mathrm{H}_2(q) + 2\mathrm{H}^{\times}_2(p,q) = \log\int p^2(x)\,dx + \log\int q^2(x)\,dx - 2\log\int p(x)q(x)\,dx,$$

between the corresponding shortest paths distributions $\rho_{\mathrm{sp}}$ and resistance distances distributions $\rho$. The top row of Figure [4](#S3.F4) shows that there is a visible linear bound in the change in the paths distributions, suggesting L-Lipschitzness. To further confirm this claim, we take the same set of worlds and find a linear projection (Section [A.1.3](#A1.SS1.SSS3)) that embeds our worlds in a 2-dimensional space, with each point coloured by its corresponding Renyi’s quadratic entropy $\mathrm{H}_2$ of the distribution of paths over its navigation graph, Figure [4](#S3.F4) (bottom).
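For readers who want to reproduce this kind of comparison, a simple plug-in estimator of the Cauchy-Schwarz Divergence between two sampled path-length distributions can be built from Gaussian kernel density estimates; the bandwidth and the toy samples below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def gaussian_overlap(x: np.ndarray, y: np.ndarray, sigma: float) -> float:
    """Closed-form integral of the product of two Gaussian KDEs.

    For kernel density estimates with bandwidth sigma, the integral of the
    product of the densities is the mean Gaussian kernel (variance 2*sigma^2)
    evaluated over all sample pairs.
    """
    diffs = x[:, None] - y[None, :]
    var = 2.0 * sigma ** 2
    return np.mean(np.exp(-diffs ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var))

def cs_divergence(x: np.ndarray, y: np.ndarray, sigma: float = 0.5) -> float:
    """D_CS(p, q) = log int p^2 + log int q^2 - 2 log int p*q, with KDE plug-ins."""
    return (np.log(gaussian_overlap(x, x, sigma))
            + np.log(gaussian_overlap(y, y, sigma))
            - 2.0 * np.log(gaussian_overlap(x, y, sigma)))

# Toy check with made-up 1-D "path length" samples: identical samples give ~0,
# and the divergence grows as the two distributions move apart.
rng = np.random.default_rng(0)
a = rng.normal(3.0, 1.0, size=1000)
print(round(cs_divergence(a, a), 3))                                 # ~0.0
print(round(cs_divergence(a, rng.normal(6.0, 1.0, size=1000)), 3))   # > 0
```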
We can see that the world space appears smooth.

#### 3.1.3 World Diversity

The world topology, jointly with object and player positions, allows one to express arbitrary navigation challenges, including various types of mazes, but also complex maps with difficult-to-access regions and occluded visibility similar to the maps used in competitive first-person video games (Jaderberg et al., [2019](#bib.bib44)), see Figure [5](#S3.F5). To illustrate diversity, one can see that the Cauchy-Schwarz Divergence between resistance distances distributions $\rho$, as well as topology distances, can be increased with relatively few local tile mutations (see Figure [6](#S3.F6)). This confirms that, despite being a relatively smooth space, our world space spans diverse worlds, which can be found with local search methods (*e.g.* evolutionary algorithms).

Figure 6: The distributions of distances between two worlds, $\mathbf{w}$ and $\mathbf{w}'$, with different numbers of local tile mutations between them (colour). The distances are the navigational resistance (left) and topology distance (right). With only a few mutations the characteristics of the world can change a lot.

### 3.2 Game Properties

Once multiple goals are combined to form a game, new complexity emerges – the ways in which the objectives of players interact, compete, and affect each other. This complex interaction is central to the fields of Game Theory and multi-agent RL (Shoham et al., [2007](#bib.bib90); Balduzzi et al., [2019](#bib.bib7)). To characterise the properties of games, we focus our analysis on three dimensions of games: the number of options, exploration difficulty, and their cooperative/competitive/balance aspects.

The first property is the number of options in a given goal (and a game). Having multiple options for getting a reward at each timestep encourages players to be constantly evaluating the environment state, assessing which option is the more rewarding one.

To define the more complex game properties, recall that every goal is a Boolean expression over a set of $d$ predicates $\phi_j$. Let us define $\phi : \mathcal{S} \rightarrow \{0,1\}^d$, a mapping that assigns each simulation state $\mathbf{s}$ to a binary vector of $d$ predicate truth values. A goal is simply a mapping from $\phi(\mathcal{S})$ to $\{0,1\}$, labelling which predicate states are rewarding. We denote by $N_{\phi} := \#\{\phi(\mathbf{s}) : \mathbf{s} \in \mathcal{S}\}$ the size of the predicate state space.
We define a distance metric between two goals $\mathbf{g}_{i}$ and $\mathbf{g}_{j}$ as

$$\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}} := \frac{\#\{\phi(\mathbf{s}): r_{\mathbf{g}_{i}}(\mathbf{s})\neq r_{\mathbf{g}_{j}}(\mathbf{s})\}}{N_{\phi}}\in[0,1].$$

This distance between two goals is the fraction of predicate states on which one goal is rewarding and the other is not. Analogously, between two games,

$$\|\mathbf{G}_{i}-\mathbf{G}_{j}\|_{\mathfrak{G}} := \frac{1}{n}\sum_{k=1}^{n}\|(\mathbf{G}_{i})_{k}-(\mathbf{G}_{j})_{k}\|_{\mathcal{G}}\in[0,1].$$

This leads to the following observation.

###### Observation.

$(\mathcal{G},\|\cdot\|_{\mathcal{G}})$ and $(\mathfrak{G},\|\cdot\|_{\mathfrak{G}})$ are metric spaces.
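Under the same toy representation as in the previous sketch (goals as Boolean functions over predicate vectors – an illustrative assumption, not the paper's implementation), the goal and game distances can be computed by direct enumeration:

```python
from itertools import product

d = 3
predicate_states = list(product([0, 1], repeat=d))
N_phi = len(predicate_states)

def reward(goal, phi):
    return int(any(all(phi[i] == v for i, v in option) for option in goal))

def goal_distance(g_i, g_j):
    """||g_i - g_j||_G: fraction of predicate states with differing rewards."""
    differing = sum(reward(g_i, phi) != reward(g_j, phi) for phi in predicate_states)
    return differing / N_phi

def game_distance(G_i, G_j):
    """||G_i - G_j||_(game space): mean per-player goal distance."""
    assert len(G_i) == len(G_j)
    return sum(goal_distance(gi, gj) for gi, gj in zip(G_i, G_j)) / len(G_i)

g_a = [[(0, 1)]]            # rewarded whenever predicate 0 holds
g_b = [[(0, 1), (1, 1)]]    # rewarded when predicates 0 and 1 both hold
print(goal_distance(g_a, g_b))              # 0.25: the two states (1, 0, *) differ
print(game_distance((g_a, g_a), (g_a, g_b)))
```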
In particular we have

$$\mathbf{g}_{i}\equiv\mathbf{g}_{j}\iff r_{\mathbf{g}_{i}}=r_{\mathbf{g}_{j}}\iff\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}}=0,$$

$$\mathbf{G}_{i}\equiv\mathbf{G}_{j}\iff\|\mathbf{G}_{i}-\mathbf{G}_{j}\|_{\mathfrak{G}}=0.$$

This allows us to define the next game property: exploration difficulty.

###### Definition 4.

Exploration difficulty of a game is the fraction of predicate states in which no player is being rewarded:

$$\kappa(\mathbf{G})=\kappa((\mathbf{g}_{1},\dots,\mathbf{g}_{n}))=\frac{\#\{\phi(\mathbf{s}):\forall_{k}\ r_{\mathbf{g}_{k}}(\mathbf{s})=0\}}{N_{\phi}}.$$

We will also refer to the quantity

$$\hat{\kappa}(\mathbf{G}):=N_{\phi}\,\kappa(\mathbf{G})$$

as the unnormalised exploration difficulty. One simple interpretation of this quantity is: assuming each of the predicates is independent and equally probable to be (dis-)satisfied at a given timestep, $1-\kappa(\mathbf{g})$ describes the probability of at least one player getting a reward. Consequently, we will refer to goals as trivial if $\kappa(\mathbf{g})\in\{0,1\}$, since these are goals for which every policy is optimal (similarly, we say a game is trivial from the perspective of the main agent if $\kappa(\mathbf{g}_{1})\in\{0,1\}$).
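Continuing the toy predicate-state representation from the sketches above (again an illustrative assumption), exploration difficulty reduces to counting predicate states in which no goal is satisfied:

```python
from itertools import product

d = 3
predicate_states = list(product([0, 1], repeat=d))
N_phi = len(predicate_states)

def reward(goal, phi):
    return int(any(all(phi[i] == v for i, v in option) for option in goal))

def exploration_difficulty(game):
    """kappa(G): fraction of predicate states where no player is rewarded."""
    unrewarding = sum(all(reward(g, phi) == 0 for g in game) for phi in predicate_states)
    return unrewarding / N_phi

def unnormalised_exploration_difficulty(game):
    return N_phi * exploration_difficulty(game)

game = ([[(0, 1)]],            # player 1: predicate 0 must hold
        [[(1, 1), (2, 1)]])    # player 2: predicates 1 and 2 must both hold
print(exploration_difficulty(game))   # 3/8: states 000, 001, 010 reward nobody
```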
###### Proposition.

For every goal $\mathbf{g}$ with $\kappa(\mathbf{g})=0$ or $\kappa(\mathbf{g})=1$, every policy is optimal.

###### Proof 3.1.

Let us assume that $\kappa(\mathbf{g})=z\in\{0,1\}$. This means that for every state $\mathbf{s}$ we have $r_{\mathbf{g}}(\mathbf{s})=1-z$. Consequently, for every policy $\pi$ we have $\mathbf{V}_{\pi}(\mathbf{x})=\mathbf{V}_{\pi}((\mathbf{w},(\mathbf{g}_{1},\dots,\mathbf{g}_{n}),(\pi_{1},\dots,\pi_{n})))=T(1-z)$, and so in particular $\forall_{\pi}\ \mathbf{V}_{\pi}(\mathbf{x})=T(1-z)=\max_{\pi}\mathbf{V}_{\pi}(\mathbf{x})=\mathbf{V}^{*}(\mathbf{x})$.

Given exploration difficulty, we now define a new property – the notion of cooperativeness – which assigns a number between 0 and 1 to each game, where a game of cooperativeness 1 is one where all players always get rewards jointly, and a game of cooperativeness 0 is one where they can never get a reward at the same timestep.

###### Definition 5.

Cooperativeness is the fraction of predicate states in which all the players are being rewarded, relative to the number of predicate states in which at least one of them is:

$$\mathrm{coop}(\mathbf{G})=\mathrm{coop}((\mathbf{g}_{1},\dots,\mathbf{g}_{n}))=\frac{\#\{\phi(\mathbf{s}):\forall_{k}\ r_{\mathbf{g}_{k}}(\mathbf{s})=1\}}{N_{\phi}-\hat{\kappa}(\mathbf{G})}.$$

Symmetrically, competitiveness can be expressed as $\mathrm{comp}(\mathbf{G})=1-\mathrm{coop}(\mathbf{G})$, or more explicitly with the following definition.
###### Definition 6.

Competitiveness is the fraction of predicate states in which some but not all players are being rewarded, relative to the number of predicate states in which at least one of them is:

$$\mathrm{comp}((\mathbf{g}_{1},\dots,\mathbf{g}_{n}))=\frac{\#\{\phi(\mathbf{s}):\max_{k}r_{\mathbf{g}_{k}}(\mathbf{s})\neq\min_{k}r_{\mathbf{g}_{k}}(\mathbf{s})\}}{N_{\phi}-\hat{\kappa}(\mathbf{G})}.$$

Finally, let us introduce the property of balance of a game. In game design, the issue of one player having a constant advantage is a common one, referred to as an imbalance. Whilst fully symmetric, simultaneous-move games are balanced by construction, it is a complex problem to assess the degree of balance when the game is not symmetric, *i.e.* when the goals of the players differ.

###### Definition 7.

Balance with respect to game transformations $\Xi\supset\{\mathrm{identity}\}$ is the maximal cooperativeness of the game when goals are transformed with elements of $\Xi$:

$$\mathrm{bal}(\mathbf{G})=\max_{\xi\in\Xi}\mathrm{coop}(\xi(\mathbf{G})).$$

With the above definition it is easy to note that when $\Xi=\{\mathrm{identity}\}$, balance is equivalent to cooperativeness. Consequently, balance can be seen as a relaxation of the notion of cooperation, under the assumption that some aspects of the game rules are equivalent (equally hard). For XLand we note that colours of objects should have a negligible effect on the complexity of a task, meaning that satisfying the predicate hold(me,yellow sphere) should be equally hard as satisfying hold(me,purple sphere). Consequently, we take $\Xi$ to be the set of all bijective recolourings of objects in goals that are consistent across the entire goal.
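The following sketch computes cooperativeness, competitiveness and a recolouring-based balance for a toy two-player game under the same illustrative predicate-state representation as the earlier sketches. The predicate set is hypothetical, and we allow a separate recolouring per goal (consistent within each goal), which is one reading of the definition above; the real XLand transformation set may differ.

```python
from itertools import product, permutations

# Hypothetical predicates, indexed by (relation, colour), for illustration only.
COLOURS = ["yellow", "purple"]
PREDICATES = [("hold", c) for c in COLOURS] + [("near", c) for c in COLOURS]
IDX = {p: i for i, p in enumerate(PREDICATES)}
d = len(PREDICATES)
predicate_states = list(product([0, 1], repeat=d))
N_phi = len(predicate_states)

def reward(goal, phi):
    return int(any(all(phi[i] == v for i, v in option) for option in goal))

def kappa_hat(game):
    return sum(all(reward(g, phi) == 0 for g in game) for phi in predicate_states)

def coop(game):
    joint = sum(all(reward(g, phi) == 1 for g in game) for phi in predicate_states)
    return joint / (N_phi - kappa_hat(game))

def comp(game):
    return 1.0 - coop(game)

def recolour(goal, mapping):
    # Apply one bijective colour mapping consistently to every predicate of a goal.
    return [[(IDX[(PREDICATES[i][0], mapping[PREDICATES[i][1]])], v) for i, v in option]
            for option in goal]

def balance(game):
    maps = [dict(zip(COLOURS, perm)) for perm in permutations(COLOURS)]  # includes identity
    return max(coop(tuple(recolour(g, m) for g, m in zip(game, per_goal)))
               for per_goal in product(maps, repeat=len(game)))

g1 = [[(IDX[("hold", "yellow")], 1)]]   # player 1: hold a yellow object
g2 = [[(IDX[("hold", "purple")], 1)]]   # player 2: hold a purple object
print(coop((g1, g2)), comp((g1, g2)), balance((g1, g2)))  # 1/3, 2/3, 1.0
```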
#### 3.2.1 Game Vastness

Let us denote the number of unique atomic predicates by $n_{a}$, the number of options a goal consists of by $n_{o}$, and the number of predicates in each option by $n_{c}$. There are exactly $n_{a}^{n_{c}\cdot n_{o}}$ goals that differ in terms of their string representation; however, many goals are equivalent in the sense that

$$\mathbf{g}_{i}\equiv\mathbf{g}_{j}\iff r_{\mathbf{g}_{i}}=r_{\mathbf{g}_{j}}.$$

For example, the goal of *seeing a purple sphere or not seeing a purple sphere* is equivalent to the goal of *holding a yellow cube or not holding a yellow cube*, both corresponding to $r(\mathbf{s})=1$. Counting the exact number of unique reward functions $r$ that emerge from $n_{o}$ options, each being a conjunction of $n_{c}$ out of $n_{a}$ predicates, is a hard combinatorial problem, but under the assumption that each atomic predicate (apart from their negations) is independently solvable we can provide a lower bound on the number of unique goals.

###### Theorem.

Under the assumption that each atomic predicate that does not involve negation is independently solvable, the number of unique $n$-player games $N_{\mathfrak{G}}$, with respect to the reward functions they define, satisfies

$$\frac{1}{n!}\left[\frac{1}{n_{o}!}\prod_{i=1}^{n_{o}}{n_{a}/2-i\cdot n_{c}\choose n_{c}}2^{n_{c}}\right]^{n}\;\leq\;N_{\mathfrak{G}}\;\leq\;{n_{a}\choose n_{c}}^{n\cdot n_{o}}.$$
###### Proof 3.2.

The high-level idea for the lower bound is to count the number of games where, inside each goal, every predicate is unique (and can only repeat across players). First, let us prove the statement for two goals, $\mathbf{g}_{i}$ and $\mathbf{g}_{j}$, where each has a corresponding set of predicates used, $(\phi_{io_{k}c_{l}})_{k,l=1}^{n_{o},n_{c}}$ and $(\phi_{jo_{k}c_{l}})_{k,l=1}^{n_{o},n_{c}}$, each lexicographically sorted over options (indexed by $o_{k}$) and over predicates inside each option (indexed by $c_{l}$), so that the option and alternative orderings are irrelevant.
If the two goals are different, this means that there exist $k^{*},l^{*}$ such that $\phi_{ik^{*}l^{*}}\neq\phi_{jk^{*}l^{*}}$. Let us take the smallest such $k^{*}$ and a corresponding smallest $l^{*}$. This means that there exists an option in one of the goals that the other goal does not possess. Without loss of generality, let us assume it is an option of $\mathbf{g}_{i}$, meaning that $\neg\exists_{k'}\ \phi_{jk'}=\phi_{ik^{*}}$. Since this option uses predicates that are unique across the goal, let us define $\mathbf{s}^{*}$ as a simulation state such that all the predicates of this option are true, while all the other predicates are false. Then we have

$$r_{\mathbf{g}_{i}}(\mathbf{s}^{*})=1\neq 0=r_{\mathbf{g}_{j}}(\mathbf{s}^{*}),$$

proving that $r_{\mathbf{g}_{i}}\neq r_{\mathbf{g}_{j}}$ and thus $\mathbf{g}_{i}\not\equiv\mathbf{g}_{j}$. The only thing left is to count such goals.
For that, let us note that this is an iterative process, where for the $i$-th of the $n_{o}$ options we can pick $n_{c}$ out of $n_{a}/2-i\cdot n_{c}$ predicates to be used (since we already picked $i\cdot n_{c}$ before, and we are not picking negations). Once we have picked the predicates, each of them can be either itself or its negation, which introduces the $2^{n_{c}}$ factor. And since the order of the options is irrelevant, we simply divide by the number of permutations of length $n_{o}$, leading to

$$\frac{1}{n_{o}!}\prod_{i=1}^{n_{o}}{n_{a}/2-i\cdot n_{c}\choose n_{c}}2^{n_{c}}$$

and completing the lower bound proof. The upper bound comes from simply noting that every reward function that comes from a Boolean expression with $n_{o}$ alternatives, each being a conjunction of $n_{c}$ out of $n_{a}$ predicates, has a corresponding Boolean expression of this form, and thus we can just count how many such expressions there are:

$${n_{a}\choose n_{c}}^{n\cdot n_{o}},$$

completing the upper bound proof.
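As a rough aid to reading the theorem, here is a small sketch that evaluates both bounds for a given $(n, n_a, n_o, n_c)$. Note that how $n_a$ is counted (with or without negated predicates) affects the resulting magnitudes, so this is only an illustration of the formulas, not a reproduction of the paper's figures.

```python
from math import comb, factorial

def game_count_bounds(n, n_a, n_o, n_c):
    """Lower and upper bounds on the number of unique n-player games."""
    per_goal = 1
    for i in range(1, n_o + 1):
        # pick n_c of the remaining non-negated predicates, each possibly negated
        per_goal *= comb(n_a // 2 - i * n_c, n_c) * 2 ** n_c
    lower = (per_goal // factorial(n_o)) ** n // factorial(n)
    upper = comb(n_a, n_c) ** (n * n_o)
    return lower, upper

lo, hi = game_count_bounds(n=2, n_a=200, n_o=3, n_c=3)
print(f"lower has {len(str(lo))} digits, upper has {len(str(hi))} digits")
```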
Figure 7 shows these bounds as functions of the number of options, atoms, and conjunctions. As an example, we see that with 3 options, each a conjunction of 3 predicates, using a set of 200 atomic predicates (the approximate number available in practice) gives us more than $10^{37}$ unique 2-player games (composed of more than $10^{18}$ goals) – a vast space of games.

Figure 7: Bounds on the number of 2-player games provided by the theorem in Section 3.2.1, as functions of the number of options and atomic predicates. Our evaluation space (Section 4.2) spans approximately 200 atomic predicates and up to 3 options.

#### 3.2.2 Game Smoothness

For our XLand task space to exhibit smoothness, the game space itself must be smooth: if we change our games by a small amount, the game properties of interest should only change by a small amount. We show that the interesting properties of our games are L-Lipschitz functions.

###### Proposition.

Exploration difficulty is a $1$-Lipschitz function, meaning that for any $\mathbf{G}_{i},\mathbf{G}_{j}$ we have

$$|\kappa(\mathbf{G}_{i})-\kappa(\mathbf{G}_{j})|\leq\|\mathbf{G}_{i}-\mathbf{G}_{j}\|_{\mathfrak{G}}.$$

###### Proof 3.3.

We will show this with a proof by contradiction.
Let us assume that the negation holds, meaning that there are two games such that

$$|\kappa(\mathbf{G}_{i})-\kappa(\mathbf{G}_{j})|>\|\mathbf{G}_{i}-\mathbf{G}_{j}\|_{\mathfrak{G}}.$$

This would mean that

$$\left|\#\{\phi(\mathbf{s}):\forall_{k}\ r_{(\mathbf{G}_{i})_{k}}(\mathbf{s})=0\}-\#\{\phi(\mathbf{s}):\forall_{k}\ r_{(\mathbf{G}_{j})_{k}}(\mathbf{s})=0\}\right|>\|\mathbf{G}_{i}-\mathbf{G}_{j}\|_{\mathfrak{G}}\,N_{\phi}.$$

The left-hand side of the inequality measures the difference in the number of non-rewarding states. The right-hand side measures the number of states that simply have a different reward (and thus already includes those counted on the left-hand side). Clearly the left-hand side cannot be strictly bigger than the right. Contradiction.

###### Theorem.

$\mathrm{coop}((\cdot,\mathbf{g}'))$ is a $\tfrac{1}{1-k}$-Lipschitz function with respect to $\|\cdot\|_{\mathcal{G}}$ for any $\mathbf{g}$ such that $\kappa((\mathbf{g},\mathbf{g}'))=k$.

###### Proof 3.4.

Let us assume that

$$\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}}=\tfrac{z}{N_{\phi}}.$$

From the definition of the metric this means there are exactly $z$ predicate states where one of the goals is rewarding and the other is not.
Let us denote by $y$ the number of predicate states where both $\mathbf{g}_{i}$ and $\mathbf{g}'$ are rewarded. Then the number of predicate states where both $\mathbf{g}_{j}$ and $\mathbf{g}'$ are rewarded has to belong to $(y-z,y+z)$. Now, denoting $\hat{k}=kN_{\phi}$, we have

$$|\mathrm{coop}(\mathbf{g}_{i},\mathbf{g}')-\mathrm{coop}(\mathbf{g}_{j},\mathbf{g}')|\,(N_{\phi}-\hat{k})\;\leq\;z\;=\;\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}}\,N_{\phi},$$

and thus

$$|\mathrm{coop}(\mathbf{g}_{i},\mathbf{g}')-\mathrm{coop}(\mathbf{g}_{j},\mathbf{g}')|\;\leq\;\frac{N_{\phi}}{N_{\phi}-\hat{k}}\,\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}}\;=\;\frac{N_{\phi}}{N_{\phi}-kN_{\phi}}\,\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}}\;=\;\frac{1}{1-k}\,\|\mathbf{g}_{i}-\mathbf{g}_{j}\|_{\mathcal{G}}.$$
It is natural to ask whether the restriction imposed is not empty, but it is easy to prove that in the vicinity of any game there is another one satisfying said restriction.

###### Proposition.

For any game $\mathbf{G}=(\mathbf{g},\mathbf{g}')$ where $\kappa(\mathbf{G})=k>0$ there exists a goal $\mathbf{g}''$ such that $\kappa((\mathbf{g},\mathbf{g}''))=k$ and it is in the vicinity of the previous game in the sense that

$$\|(\mathbf{g},\mathbf{g}'')-(\mathbf{g},\mathbf{g}')\|_{\mathfrak{G}}=\frac{1}{2N_{\phi}}.$$

###### Proof.

Without loss of generality, let us assume that $\mathbf{g}$ has at least one rewarding predicate state $\phi(\mathbf{s}^{*})$. If $\phi(\mathbf{s}^{*})$ is also rewarding for $\mathbf{g}'$, then we define $\mathbf{g}''$ as an exact copy of $\mathbf{g}'$ but set $\phi(\mathbf{s}^{*})$ to be non-rewarding, so that the two goals differ on exactly one predicate state. If it was not rewarding in $\mathbf{g}'$, we symmetrically make it rewarding in $\mathbf{g}''$, again moving by one predicate state in goal space. The resulting game $(\mathbf{g},\mathbf{g}'')$ has $\kappa((\mathbf{g},\mathbf{g}''))=k$, since the modified predicate state is rewarding for $\mathbf{g}$ and therefore never contributes to $\kappa$. In a natural way, the same is true for competitiveness.
###### Observation.

$\mathrm{comp}((\cdot,\mathbf{g}'))$ is a $\tfrac{1}{1-k}$-Lipschitz function with respect to $\|\cdot\|_{\mathcal{G}}$ for any $\mathbf{g}$ such that $\kappa((\mathbf{g},\mathbf{g}'))=k$.

Therefore, if we change one of the goals by a small amount, we have an upper bound on the change in exploration difficulty, cooperativeness, and competitiveness of the whole game. Figure 8 verifies these properties empirically by showing the relation between the distance in game space and the change in competitiveness and exploration difficulty. We also provide a 2D projection of sample games using PCA, showing that these two properties are clearly visible in the embedding, suggesting they explain a lot of the variance in game space. These examples show analytically and empirically the smoothness of the game space.

Figure 8: Empirical confirmation of game-space smoothness with respect to exploration difficulty ($\kappa$, left) and competitiveness ($\mathrm{comp}$, right). We took a single game, then created 1000 similar games by performing simple edits on one of its goals ($\mathbf{g}'$), creating a new one ($\mathbf{g}''$) – removal of an option/relation, adding a new option/relation, substitution of a relation, *etc*. (Top) Each of these games corresponds to a point, with the x-axis being its distance from one randomly selected anchor game ($\mathbf{G},\mathbf{G}'$) (with the exploration difficulty reported in the title), and the y-axis the difference in its $\mathrm{comp}$ or $\kappa$ (colour corresponds symmetrically to $\kappa$ on the $\mathrm{comp}$ plot and vice versa). The pink line is the upper bound from the proposition and theorem above. (Bottom) The matrix of pairwise distances between these games is computed, and PCA is used to embed them on a plane, with $\mathrm{comp}$ and $\kappa$ represented by point colour. In both cases one can see very smooth transitions.
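To illustrate how data of the kind shown in Figure 8 could be generated, here is a toy sketch (using the illustrative reward-table representation of goals from the earlier snippets, not the actual XLand tooling) that repeatedly mutates one goal and records game distance together with the change in exploration difficulty and competitiveness:

```python
import random
from itertools import product

random.seed(0)
d = 4
states = list(product([0, 1], repeat=d))
N_phi = len(states)

def random_goal():
    # A goal represented directly by its reward table over predicate states.
    return {phi: random.randint(0, 1) for phi in states}

def kappa(game):
    return sum(all(g[phi] == 0 for g in game) for phi in states) / N_phi

def comp(game):
    unrewarding = kappa(game) * N_phi
    mixed = sum(len({g[phi] for g in game}) == 2 for phi in states)
    return mixed / (N_phi - unrewarding)

def game_dist(G_i, G_j):
    per_goal = [sum(a[phi] != b[phi] for phi in states) / N_phi for a, b in zip(G_i, G_j)]
    return sum(per_goal) / len(per_goal)

g1, g2 = random_goal(), random_goal()
g1[states[0]] = 1          # ensure at least one rewarding state so comp is well defined
anchor = (g1, g2)
records = []
for _ in range(1000):
    mutated = dict(g2)
    for phi in random.sample(states, random.randint(1, 3)):  # a few local edits
        mutated[phi] = 1 - mutated[phi]
    other = (g1, mutated)
    records.append((game_dist(anchor, other),
                    abs(kappa(anchor) - kappa(other)),
                    abs(comp(anchor) - comp(other))))
print(records[:3])  # (distance, |change in kappa|, |change in comp|) tuples for plotting
```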
#### 3.2.3 Game Diversity

We have shown that the game space consists of vastly many games, and that small changes in their definitions lead to small changes in properties of interest. One missing aspect is to show how diverse this game space is: that eventually, after taking many small steps, one can change a game into a wildly different one.

###### Theorem.

For every two-player game $\mathbf{G}$ such that $\hat{\kappa}(\mathbf{G})=k$ and a desired change in competitiveness $m\in(-\mathrm{comp}(\mathbf{G}),1-\mathrm{comp}(\mathbf{G}))$ such that $k|m|\in\mathbb{N}$, there exists a $\mathbf{G}'$ such that $\mathrm{comp}(\mathbf{G}')=\mathrm{comp}(\mathbf{G})+m$ and

$$\|\mathbf{G}-\mathbf{G}'\|_{\mathfrak{G}}\leq\tfrac{k|m|}{2}.$$

###### Proof 3.5.

Let us first assume that $m>0$; consequently $\mathrm{comp}(\mathbf{G})$ is smaller than 1, which means that if we look at $\mathbf{G}=(\mathbf{g}_{1},\mathbf{g}_{2})$ we can find at least $k\cdot(1-m)$ predicate states where $r_{\mathbf{g}_{1}}(\phi(\mathbf{s})_{i})=r_{\mathbf{g}_{2}}(\phi(\mathbf{s})_{i})$.
Let us define

$$\mathbf{g}'_{2}(\phi(\mathbf{s})):=\begin{cases}1-\mathbf{g}_{2}(\phi(\mathbf{s}))&\text{if }\phi(\mathbf{s})\in\{\phi(\mathbf{s})_{i}\}_{i=1}^{mk}\\ \mathbf{g}_{2}(\phi(\mathbf{s}))&\text{otherwise.}\end{cases}$$

By construction $\kappa((\mathbf{g}_{1},\mathbf{g}_{2}))=\kappa((\mathbf{g}_{1},\mathbf{g}'_{2}))$, $\mathrm{comp}((\mathbf{g}_{1},\mathbf{g}_{2}))+m=\mathrm{comp}((\mathbf{g}_{1},\mathbf{g}'_{2}))$, and $\|(\mathbf{g}_{1},\mathbf{g}_{2})-(\mathbf{g}_{1},\mathbf{g}'_{2})\|=\tfrac{km}{2}$. The proof for $m<0$ is analogous.

To see qualitatively the diversity of games, we present a few examples of games showcasing a range of challenges imposed on players.

##### Simple navigation task

XLand games include simple challenges such as a player being tasked with finding an object of interest and grabbing it. Tasks like this challenge navigational skills, perception, and basic manipulation.
$$\mathbf{g}_{1}:=\texttt{hold(me, yellow sphere)}\qquad\mathbf{g}_{2}:=\texttt{near(me, yellow pyramid)}$$
$$\kappa(\mathbf{G})=\tfrac{1}{4}\qquad\mathrm{comp}(\mathbf{G})=\tfrac{2}{3}\qquad\mathrm{bal}(\mathbf{G})=\tfrac{7}{15}$$

##### Simple cooperation game

Setting the goal of both players to be identical gives a fully cooperative, balanced game, which challenges a player's ability to navigate and manipulate objects, but also to synchronise and work together.

$$\mathbf{g}_{1}:=\texttt{near(yellow pyramid, yellow sphere)}\qquad\mathbf{g}_{2}:=\texttt{near(yellow pyramid, yellow sphere)}$$
$$\kappa(\mathbf{G})=\tfrac{1}{2}\qquad\mathrm{comp}(\mathbf{G})=0\qquad\mathrm{bal}(\mathbf{G})=1$$

##### Hide and Seek

A well-known game of hiding and seeking, which has been used in the past as a source of potentially complex behaviours (Baker et al., [2020](#bib.bib5)). This is an example of a simple, fully competitive, imbalanced game in XLand.

$$\mathbf{g}_{1}:=\texttt{see(me, opponent)}\qquad\mathbf{g}_{2}:=\texttt{not(see(opponent, me))}$$
$$\kappa(\mathbf{G})=0\qquad\mathrm{comp}(\mathbf{G})=1\qquad\mathrm{bal}(\mathbf{G})=\tfrac{1}{3}$$

##### Capture the Cube

The competitive game of Capture the Flag has been shown to be a rich environment for agents to learn to interact with a complex 3d world, coordinate, and compete (Jaderberg et al., [2019](#bib.bib44)).
Each player must get the flag (for example represented as a cube) to their base floor to score reward. An example one-flag instantiation of this game in XLand (with a supporting world) is

$$\begin{aligned}
\mathbf{g}_{1}&:=\texttt{on(black cube, blue floor)}\wedge\texttt{not(on(black cube, red floor))}\\
\mathbf{g}_{2}&:=\texttt{on(black cube, red floor)}\wedge\texttt{not(on(black cube, blue floor))}
\end{aligned}$$
$$\kappa(\mathbf{G})=\tfrac{1}{4}\qquad\mathrm{comp}(\mathbf{G})=1\qquad\mathrm{bal}(\mathbf{G})=1$$

##### XRPS

A final example is that of XRPS games, inspired by the study of non-transitivities in games leading to strategic depth (Vinyals et al., [2019](#bib.bib107); Czarnecki et al., [2020](#bib.bib20)). We give each player three options to choose from, each one being explicitly countered by exactly one other option. A player can choose to pick up a yellow sphere, but it will get a reward if and only if the opponent is not holding a purple sphere; if it picks up a purple sphere, the reward will be given if and only if the opponent does not pick up a black sphere, and so on. With these cyclic rules, players are encouraged not only to navigate and perceive their environment, but also to be aware of opponent actions and strategies, and to try to actively counter potential future behaviours, leading to potentially complex, time-extended dynamics.
$$\begin{aligned}
\widehat{\mathbf{g}}_{\mathrm{rock}}&:=\texttt{hold(me,yellow sphere)}\wedge\texttt{not(hold(opponent,yellow sphere))}\wedge\texttt{not(hold(opponent,purple sphere))}\\
\widehat{\mathbf{g}}_{\mathrm{paper}}&:=\texttt{hold(me,purple sphere)}\wedge\texttt{not(hold(opponent,purple sphere))}\wedge\texttt{not(hold(opponent,black sphere))}\\
\widehat{\mathbf{g}}_{\mathrm{scissors}}&:=\texttt{hold(me,black sphere)}\wedge\texttt{not(hold(opponent,black sphere))}\wedge\texttt{not(hold(opponent,yellow sphere))}\\
\mathbf{g}_{1}&:=\widehat{\mathbf{g}}_{\mathrm{rock}}\vee\widehat{\mathbf{g}}_{\mathrm{paper}}\vee\widehat{\mathbf{g}}_{\mathrm{scissors}}\qquad
\mathbf{g}_{2}:=\widehat{\mathbf{g}}_{\mathrm{rock}}\vee\widehat{\mathbf{g}}_{\mathrm{paper}}\vee\widehat{\mathbf{g}}_{\mathrm{scissors}}
\end{aligned}$$
$$\kappa(\mathbf{G})=\tfrac{1}{4}\qquad\mathrm{comp}(\mathbf{G})=1\qquad\mathrm{bal}(\mathbf{G})=1$$

4 Goal and Metric
------------------

Figure 9: The process of computing normalised percentiles.
Tasks vary significantly in terms of their complexity; some have much higher optimal-policy values than others. We normalise the performance of the agent by an estimate of an optimal policy's score – using the Nash equilibrium of trained agents – providing a normalised score, which after ordering creates a normalised percentile curve. This can be iteratively updated as new trained agents are created.

In Section 2 we introduced the XLand environment, and in Section 3 we explored some of the properties of this space, such as the vastness, diversity, and smoothness across tasks. We now turn our attention to training an agent on XLand. To train an agent $\pi$ in an episodic environment such as XLand, one generally aims to maximise the expected return of the agent

$$\mathbf{V}_{\pi}(\mathrm{P}_{\pi}):=\mathbb{E}_{\mathrm{P}_{\pi}(\aleph)}\left[R_{\pi}(\mathbf{x})\right],$$

where $\mathrm{P}_{\pi}$ is an agent-specific distribution over tasks. A challenge in evaluating the performance of an agent in this massively multi-task environment comes from the fact that each task can be of completely different complexity. The optimal value

$$\mathbf{V}^{*}(\mathbf{x}):=\max_{\pi}\mathbf{V}_{\pi}(\mathbf{x})$$

of one task (where $\mathbf{V}_{\pi}(\mathbf{x})$ is interpreted as an expectation over the Dirac delta distribution around $\mathbf{x}$) can be of a different order of magnitude than the optimal value of another task, $\mathbf{V}^{*}(\mathbf{x}')$, *i.e.* $\mathbf{V}^{*}(\mathbf{x})\gg\mathbf{V}^{*}(\mathbf{x}')$. Consequently, simply averaging the agent's value across all tasks to form a single score will overemphasise tasks with large optimal values. Even if one were able to sensibly normalise value per task, with a big enough set of tasks averaging will remove relevant information regarding agent failures. For example, averaging will not surface an agent's failure modes on some tasks if these tasks do not occupy a big part of the task space (Balduzzi et al., [2018](#bib.bib6)).
This becomes an even bigger issue if there is no particular ground-truth test distribution of interest, but rather our goal is to find a policy that is generally capable. A game-theoretic solution would be to focus on the infimum of performance (the worst-case scenario (Nash et al., [1950](#bib.bib66))), since performance on the worst-case task always lower bounds any expectation over a distribution defined over the same set. Unfortunately, the infimum provides no notion of progress or learning signal if there are any tasks that are simply impossible or extremely hard.

### 4.1 Normalised Percentiles

In this work we seek to create *generally capable* agents in the whole XLand task space. General capability is not strictly defined, but has some desiderata:

* Agents should catastrophically fail on as few tasks as possible.
* Agents should be competent on as many tasks as possible.
* Broad ability is preferred over narrow competency.

These desiderata cannot be encapsulated by a single number describing an agent's performance, as they do not define a total order (Balduzzi et al., [2019](#bib.bib7)). We move away from characterising agents purely by expected return, and instead consider the distribution of returns over a countable task space. However, for large task spaces this is a very high-dimensional object. In addition, due to the drastically different return scales of tasks, returns cannot be compared directly, and one needs knowledge of each individual task to interpret the significance of a reward. Naturally, one could normalise the return per task by the return of the optimal policy on each specific task. However, in practice:

* an optimal policy is not known a priori,
* we want to use these normalisers over the entire environment space, which means that we would need to know a single optimal policy for the entire space and then normalise by its score on each task.

To address these issues we make two practical simplifications. First, to address the need of having one optimal policy for the entire space, we compute the normalisation factor for each game independently, and then combine them into a global normaliser. Second, even with this simplification we do not have access to an optimal policy per game. However, we can take ideas from multi-agent training algorithms that eventually converge to a Nash equilibrium (Heinrich and Silver, [2016](#bib.bib38); Marris et al., [2021](#bib.bib59); McMahan et al., [2003](#bib.bib61)). We iteratively build a set of agents that are capable of solving a specific goal, and use the best mixture (Nash equilibrium) of them as a normalising constant. As training progresses and our agents become better at satisfying the goal, they will beat the existing Nash equilibrium and improve the normalising constant. This dynamic provides us with an iterative notion of improvement for a multi-task environment, rather than a fixed numerical quantity to describe progress. It is akin to theoretical results showing that in multi-agent problems it is impossible to have a fixed objective, because finding better agents and improving the quality of evaluation are the same problem (Garnelo et al., [2021](#bib.bib29)). These normalisers give us a normalised score per task.
Finally, to mitigate the problem of having a high-dimensional normalised score distribution, we characterise the distribution in terms of the percentiles of normalised score, up to a maximum of the 50th percentile (median normalised score):

$$
\begin{aligned}
\mathrm{perf}(\pi|\mathbf{g},\boldsymbol{\Pi}_{t}) &:= \min_{(\pi_{j},\mathbf{g}_{j})} \mathbb{E}_{\mathbf{w}}\left[R_{\pi}(\mathbf{w},(\mathbf{g},\mathbf{g}_{2},\dots,\mathbf{g}_{n}),(\pi_{2},\dots,\pi_{n}))\right] \\
\mathrm{norm}(\mathbf{g}|\boldsymbol{\Pi}_{t}) &:= \max_{\pi} \mathrm{perf}(\pi|\mathbf{g},\boldsymbol{\Pi}_{t}) = \mathrm{NashValue}(\mathbf{g}|\boldsymbol{\Pi}_{t}) \\
\widehat{\mathrm{perf}}(\pi|\mathbf{g},\boldsymbol{\Pi}_{t}) &:= \frac{\mathrm{perf}(\pi|\mathbf{g},\boldsymbol{\Pi}_{t})}{\mathrm{norm}(\mathbf{g}|\boldsymbol{\Pi}_{t})} \in [0,1] \\
\mathrm{perc}(\pi|\boldsymbol{\Pi}_{t})[k] &:= \mathcal{P}_{k}\left(\widehat{\mathrm{perf}}(\pi|\mathbf{g},\boldsymbol{\Pi}_{t})\right), \quad \text{for } k\in\{0,\dots,50\},
\end{aligned}
$$

where $\mathcal{P}_{k}$
is the $k$th percentile, and both the min and max operations over policies operate over convex combinations ($R_{\alpha\pi+(1-\alpha)\pi^{\prime}}(\mathbf{x}) := \alpha R_{\pi}(\mathbf{x})+(1-\alpha)R_{\pi^{\prime}}(\mathbf{x})$) of policies from a corresponding population $\boldsymbol{\Pi}_{t}$. Figure [9](#S4.F9) illustrates this process. Each agent's performance is described as 51 numbers between 0 and 1, with each number being the normalised score at each integer percentile in the range of 0 to 50 (inclusive), which forms a non-decreasing sequence

$$\mathrm{perc}(\pi)[k+1]\geq\mathrm{perc}(\pi)[k].$$

One can read out various human-interpretable quantities from this representation, *e.g.* $\mathrm{perc}(\pi)[0]$ is the infimum – the normalised score an agent obtains on the hardest game; $\mathrm{perc}(\pi)[50]$ is the median normalised performance; and the smallest $k$ such that $\mathrm{perc}(\pi)[k]>0$ tells us that the agent scores some reward in at least $(100-k)\%$ of games (and thus provides a notion of coverage/participation).
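As a concrete, unofficial sketch of this metric, the snippet below computes normalised scores and the 0–50th percentile curve from per-task returns. The input arrays and the `nash_values` normalisers are hypothetical stand-ins for $\mathrm{perf}$ and $\mathrm{norm}$; this is not the authors' code.

```python
import numpy as np

def normalised_percentiles(returns, nash_values, max_percentile=50):
    """Compute the 51-dimensional normalised percentile descriptor.

    returns:      array of shape [num_tasks], worst-case expected return of the
                  agent on each task (perf in the text).
    nash_values:  array of shape [num_tasks], Nash-equilibrium normaliser for
                  each task (norm in the text), assumed > 0.
    """
    normalised = np.clip(np.asarray(returns) / np.asarray(nash_values), 0.0, 1.0)
    ks = np.arange(max_percentile + 1)     # percentiles 0, 1, ..., 50
    return np.percentile(normalised, ks)   # non-decreasing by construction

# Hypothetical example: 5 tasks, one of them failed completely.
curve = normalised_percentiles([0.0, 30.0, 45.0, 80.0, 200.0],
                               [10.0, 60.0, 50.0, 100.0, 250.0])
print(curve[0], curve[50])  # infimum and median normalised score
```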
We say an agent $\pi$ is better than agent $\pi^{\prime}$ if and only if it achieves at least as good a score on every percentile, and on at least one percentile it achieves a strictly better score; formally:

$$
\begin{aligned}
\pi \succeq_{\boldsymbol{\Pi}_{t}} \pi^{\prime} &\iff \forall_{k}\; \mathrm{perc}(\pi|\boldsymbol{\Pi}_{t})[k] \geq \mathrm{perc}(\pi^{\prime}|\boldsymbol{\Pi}_{t})[k] \\
\pi \succ_{\boldsymbol{\Pi}_{t}} \pi^{\prime} &\iff \exists_{k}\; \mathrm{perc}(\pi|\boldsymbol{\Pi}_{t})[k] > \mathrm{perc}(\pi^{\prime}|\boldsymbol{\Pi}_{t})[k] \;\wedge\; \pi \succeq_{\boldsymbol{\Pi}_{t}} \pi^{\prime}.
\end{aligned}
$$

Let us relate this to our desiderata. If agent $\pi$ fails catastrophically (never achieves any reward) on fewer tasks than $\pi^{\prime}$, then it will have non-zero values on a larger number of percentiles, and this is captured in our notion of being better. Conversely, if catastrophic failures are more common, then $\pi$ will not be considered better (it can be non-comparable or worse). The notion of competency refers to the fraction of the score obtained by the Nash equilibrium over known policies; thus, similarly, by being competent on more tasks, $\pi$ will increase its values on the smaller percentiles. Finally, a narrow competency will be visible as low scores on the low percentiles; despite high scores on the high percentiles, such an agent will not be considered better. In addition, cutting our percentiles at 50 means that an agent that is an expert on fewer than half of the games, but does not score any points on the remaining ones, will be considered worse than an agent of broader ability.
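A minimal sketch of this ordering (ours, not the authors' implementation), operating on two percentile curves as produced above:

```python
import numpy as np

def pareto_better(perc_a, perc_b):
    """Return True iff agent A strictly Pareto dominates agent B,
    i.e. A >= B on every percentile and A > B on at least one."""
    perc_a, perc_b = np.asarray(perc_a), np.asarray(perc_b)
    at_least_as_good = np.all(perc_a >= perc_b)
    strictly_better_somewhere = np.any(perc_a > perc_b)
    return bool(at_least_as_good and strictly_better_somewhere)

# Hypothetical curves: A dominates B; A and C are non-comparable.
a = [0.0, 0.2, 0.5, 0.8]
b = [0.0, 0.1, 0.5, 0.7]
c = [0.1, 0.1, 0.4, 0.9]
print(pareto_better(a, b), pareto_better(a, c), pareto_better(c, a))  # True False False
```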
To summarise, we propose to use the following tools to measure and drive progress of the general capability of agents:

* to normalise performance by the estimated highest obtainable score,
* to iteratively improve the estimate of the highest obtainable score,
* to evaluate agents across normalised score percentiles, creating a multi-dimensional performance descriptor,
* to require Pareto dominance over this descriptor to guarantee improvements with respect to our desiderata.

### 4.2 Evaluation Task Set

The normalised percentile metric described in the previous section provides a way to compare agents and drive learning with a lens towards general capability. However, this metric is still evaluated with respect to a distribution of tasks $\mathrm{P}_{\aleph}$. The XLand task space as defined in Section [2](#S2) is prohibitively large, and as such we need to create a manageable evaluation task set against which to assess agents' general capability. Given a budget number of evaluation tasks (*e.g.* on the order of thousands), arbitrarily sampling tasks from $\aleph$ could risk critically underrepresenting the vastness and diversity of the underlying task space, with aliasing also hiding the smoothness property. As such, we define an evaluation task space that samples tasks spanning a smaller but representative subspace of XLand tasks, and skew sampling to ensure uniform coverage of interesting world and game features. Finally, we combine these evaluation worlds and games with pretrained evaluation policies to give us an evaluation task set.

##### Evaluation worlds

For evaluation, we want a set of worlds that expose agents to a range of topological challenges. We use a world-agent co-evolution process (Section [A.1.1](#A1.SS1.SSS1), Figure [32](#A1.F32)), saving the training distribution of worlds created at each point in training of this process. This gives a collection of worlds where the earlier-created worlds are generally topologically simpler than those created later in training. Uniformly sampling this collection of worlds with respect to creation time gives a set of worlds spanning the range of topological complexity (as defined by an agent learning to find an object). We also randomly apply reflections and resample object positions in this set of worlds. Finally, we add additional Wave Function Collapse (Gumin, [2016](#bib.bib34)) generated worlds, biased towards specific topological elements that we observe rarely: ones containing single central islands and door-like bottlenecks separating play areas. The gadget of each player is uniformly sampled and the colour ordering of each player randomly permuted. Exactly 12 objects are placed into each evaluation world, one of each colour-shape combination.

##### Evaluation games

In the game space, we look to create a set of evaluation games that span a large range of complexity and expressivity, but are still logically simple enough for quick human understanding. Therefore, representing the goals of the game in their *disjunctive normal form*, we restrict the evaluation games to have at most three options per goal, with each option composed of at most three predicates, and a maximum of six unique predicates used across all goals. Only two- and three-player games are considered in the evaluation set. Additionally, we ensure the evaluation set of games spans the range of competitiveness and balance (defined in Section [3](#S3)) – we create discrete buckets in competitiveness-balance space, with some buckets corresponding to the extreme values of these measures. Evaluation games are sampled such that there is an equal number of games per competitiveness-balance bucket, and, per competitiveness-balance bucket, an equal number of games across the different numbers of options and predicates in the game. We also remove trivial games (*i.e.* where $\kappa(\mathbf{g}_{1})\in\{0,1\}$). The result is an evaluation set of games which is uniform across balance buckets, competitiveness buckets, number of options, and number of predicates.
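A rough sketch of this kind of bucket-stratified sampling (our illustration; the bucket boundaries, counts, and the `sample_game` generator are hypothetical and not the paper's actual procedure):

```python
from collections import defaultdict

def stratified_game_sample(sample_game, games_per_bucket=10, n_buckets=4, max_tries=100000):
    """Draw candidate games until every competitiveness-balance bucket holds an
    equal number of games. `sample_game` must return an object with
    .competitiveness and .balance in [0, 1] (trivial games assumed rejected)."""
    buckets = defaultdict(list)
    for _ in range(max_tries):
        game = sample_game()
        key = (min(int(game.competitiveness * n_buckets), n_buckets - 1),
               min(int(game.balance * n_buckets), n_buckets - 1))
        if len(buckets[key]) < games_per_bucket:
            buckets[key].append(game)
        if len(buckets) == n_buckets * n_buckets and \
                all(len(v) == games_per_bucket for v in buckets.values()):
            break
    return [g for bucket in buckets.values() for g in bucket]
```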
##### Evaluation co-players

Each evaluation task must include policies to act as the co-players of the task, leaving a single player slot available for evaluation of an agent in the task. For the purposes of this work, we use a collection of pretrained agents. These include a noop agent that always emits the noop action (corresponding to not moving) and a random agent that emits an action uniformly sampled from the whole action space. In addition, we use agents trained on simpler incarnations of the evaluation space, as well as on sub-spaces of the evaluation space (*e.g.* an agent trained only on single-predicate games). These agents were generated during earlier phases of the research project.

We combine the evaluation worlds, games, and co-players to get test and validation sets. We first generate the test set of evaluation tasks. Next, the validation set of evaluation tasks is generated in an identical manner, but explicitly holding out all games and worlds within a certain distance of the test set (Section [A.3](#A1.SS3)), and likewise holding out all test set co-players except for the trivially generated noop and random policies. In addition, all hand-authored tasks (Section [4.3](#S4.SS3)) are held out from all evaluation task sets. The test task set consists of 1678 world-game pairs played with all 7 co-players, for a total of 11746 tasks. The validation task set consists of 2900 world-game pairs played with a growing number of co-players: noop, random, and an extra player per previous generation of training.

![Refer to caption](/html/2107.12808/assets/figures/Handauthored_examples.png)

Figure 10: Five examples of the 42 tasks in the hand-authored evaluation task set.
A full list of hand-authored evaluation tasks is given in Table [8](#A1.T8).

### 4.3 Hand-authored Task Set

The evaluation set of tasks described previously covers a diverse subspace of XLand; however, the automatic generation of these tasks can make interpretation of successful policies difficult – it can be hard to know what challenges an individual task poses. We therefore created a hand-authored set of tasks which act as interpretable evaluation tasks. In addition, many of these hand-authored evaluation tasks are out-of-distribution or represent challenges that are extremely rare in a sample from the evaluation set, and thus further test the ability of an agent to generalise. Examples of the 42 tasks in the hand-authored task set can be found in Figure [10](#S4.F10) (the full list is provided in Table [7](#A1.T7)), and include well-known tasks such as *Capture the Flag*, *Hide and Seek*, and *King of the Hill* which have been projected into XLand space. Other examples include physical challenges such as *Stop Rolling* and *Tool Use*. The hand-authored task set is also held out from all training.

5 Learning Process
-------------------

We now turn our attention to the learning process. We seek agents that are generally capable in the XLand space. As one of the proxies to this goal, we want agents that can zero-shot generalise to tasks from the test set, and we use normalised percentiles computed on the test set as the performance metric to encapsulate this. Our training process consists of three main components:

1. Deep RL to update the neural network of a single agent. Deep RL optimises an agent to maximise expected return across the distribution of training tasks it is given.
2. Dynamic task generation with population based training to provide the distribution of training tasks for a population of agents. The task distributions are changed throughout training and are themselves optimised to improve the population's normalised percentiles on the validation set.
3. Generational training of populations of agents to chain together multiple learning processes with different objectives. Agents are trained with different learning objectives per generation, with each subsequent population bootstrapping behaviour off the previous generation of agents, to improve validation normalised percentiles with each subsequent generation.

We will now describe these three components in more detail.

### 5.1 Deep Reinforcement Learning

An agent playing on an XLand task $\mathbf{x}$ takes in high-dimensional observations $\mathbf{o}_{t}$ at each timestep and produces a policy from which actions are sampled, $\mathbf{a}_{t}\sim\pi_{t}$, allowing the agent to maximise the collected reward on the task. We use a neural network to parameterise the policy, and train this network using the V-MPO RL algorithm (Song et al., [2020](#bib.bib94)).
Similarly to the original V-MPO implementation, we use single-task PopArt normalisation of the value function (van Hasselt et al., [2016](#bib.bib104); Hessel et al., [2019](#bib.bib39)). At each weight update, the network parameterising $\pi$ is updated in the direction that maximises the expected discounted return on the instantaneous task distribution, $\mathbf{V}_{\pi}(\mathrm{P}_{\pi})$. The per-timestep observation $\mathbf{o}_{t}$ the neural network takes as input consists of an RGB image from the agent player's point of view $\mathbf{o}^{\text{RGB}}_{t}$, proprioception values corresponding to the forces relating to the agent's player holding an object $\mathbf{o}^{\text{prio}}_{t}$, as well as the goal $\mathbf{g}$ of the agent's player in the task $\mathbf{x}$. A recurrent neural network processes this information to produce a value prediction $\mathbf{v}_{t}$ and policy $\pi_{t}$, from which a single action $\mathbf{a}_{t}$ is sampled.

##### Goal attention network

The recurrent neural network incorporates an architecture that is tailored towards the structure of $\mathbf{V}^{*}$, the value of an optimal policy for a given task. For simplicity let us write

$$\mathbf{V}^{*}(\mathbf{g}) := \max_{\pi}\mathbf{V}_{\pi}(\mathbf{g})$$

to denote the value of the optimal policy when we hold the world, other goals, and co-players fixed.
**Theorem 5.1 (Value Consistency).** For a goal $\mathbf{g} := \bigvee_{o=1}^{k}\left[\bigwedge_{c=1}^{n_{o}}\phi_{oc}\right]$ we have

$$\mathbf{V}^{*}(\mathbf{g}_{l}) \leq \mathbf{V}^{*}(\mathbf{g}) \leq \mathbf{V}^{*}(\mathbf{g}_{u})$$

for

$$\mathbf{g}_{l} := \bigvee_{o=1}^{k-1}\left[\bigwedge_{c=1}^{n_{o}}\phi_{oc}\right], \qquad \mathbf{g}_{u} := \bigvee_{o=1}^{k}\left[\bigwedge_{c=1}^{n^{\prime}_{o}}\phi_{oc}\right]$$

where $n^{\prime}_{o}\leq n_{o}$, *i.e.* $\mathbf{g}_{l}$ drops an option from $\mathbf{g}$ and $\mathbf{g}_{u}$ drops predicates from each option.

###### Proof.
Since $\mathbf{g}_{l}$ differs from $\mathbf{g}$ by simply missing the $k$th option, the corresponding reward function satisfies

$$r_{\mathbf{g}_{l}}(\mathbf{s}) = \max_{o=1}^{k-1}\left[\min_{c=1}^{n_{o}}\phi_{oc}(\mathbf{s})\right] \leq \max_{o=1}^{k}\left[\min_{c=1}^{n_{o}}\phi_{oc}(\mathbf{s})\right] = r_{\mathbf{g}}(\mathbf{s}).$$

Consequently $\mathbf{V}^{*}(\mathbf{g}_{l}) \leq \mathbf{V}^{*}(\mathbf{g})$.
Analogously, $\mathbf{g}_{u}$ differs from $\mathbf{g}$ by dropping some of the predicates from each option ($n^{\prime}_{o}\leq n_{o}$), which means that the corresponding reward function satisfies

$$r_{\mathbf{g}}(\mathbf{s}) = \max_{o=1}^{k}\left[\min_{c=1}^{n_{o}}\phi_{oc}(\mathbf{s})\right] \leq \max_{o=1}^{k}\left[\min_{c=1}^{n^{\prime}_{o}}\phi_{oc}(\mathbf{s})\right] = r_{\mathbf{g}_{u}}(\mathbf{s}).$$

Consequently, $\mathbf{V}^{*}(\mathbf{g}) \leq \mathbf{V}^{*}(\mathbf{g}_{u})$.

This property says that for each game we can easily construct another game providing a lower or an upper bound on the optimal value, by selecting a subset of the options or a subset of the conjunctions within each option, respectively. Therefore, with $\mathbf{g} := \widehat{\mathbf{g}}_{1}\vee\dots\vee\widehat{\mathbf{g}}_{n_{o}}$ we have

$$\mathbf{V}^{*}(\mathbf{g}) \geq \max_{i=1}^{n_{o}}\mathbf{V}^{*}(\widehat{\mathbf{g}}_{i}).$$

By putting $\widehat{\mathbf{g}}_{0} := \mathbf{g}$ we can consequently say that

$$\mathbf{V}^{*}(\mathbf{g}) = \max_{i=0}^{n_{o}}\mathbf{V}^{*}(\widehat{\mathbf{g}}_{i}),$$

*i.e.* the optimal value for $\mathbf{g}$ is the maximum of the values of the subgames consisting of each of the options $\widehat{\mathbf{g}}_{i}$ and the full goal $\mathbf{g}$ itself.
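To make the reward structure concrete, here is a small sketch (ours, with made-up predicate names) of how a DNF goal evaluates to a reward from predicate truth values, and why dropping an option can never increase the reward:

```python
def goal_reward(options, predicate_values):
    """Reward of a DNF goal: 1.0 if any option has all of its predicates true.

    options:          list of options, each a list of predicate names (a conjunction).
    predicate_values: dict mapping predicate name -> bool for the current state.
    """
    return float(any(all(predicate_values[p] for p in option) for option in options))

# Hypothetical goal with two options and a state where only the first option holds.
goal = [["hold(me,yellow sphere)", "not(hold(opponent,yellow sphere))"],
        ["hold(me,purple sphere)"]]
state = {"hold(me,yellow sphere)": True,
         "not(hold(opponent,yellow sphere))": True,
         "hold(me,purple sphere)": False}

full = goal_reward(goal, state)                # reward of the full goal
fewer_options = goal_reward(goal[:1], state)   # dropping an option: <= full reward
print(full, fewer_options)
```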
![Refer to caption](/html/2107.12808/assets/figures/GOAT.png)

Figure 11: A schematic of the neural network used to parameterise the agent's policy. The input observations $\mathbf{o}_{t}$ consist of RGB images and the proprioception, and the agent also receives its goal $\mathbf{g}$. The agent processes the observations through the torso and a recurrent core to produce $\mathbf{h}_{t}$, which is used for the predicate predictor, producing $\mathbf{p}_{t}$. The recurrent core output, the predicate predictor output, and the goal are passed to the GOAT module. The GOAT module (see Section [5.1](#S5.SS1)) attends to a specific part of the recurrent representation based on the current goal of the agent, and performs logical analysis of the goal using *value consistency* (see Theorem [5.1](#S5.Ex40)). The goal embedding and predicate predictor architectures are provided in Figure [37](#A1.F37). Modules with the same names share weights (*i.e.* each value head, as well as each GOAT unit).

We encode this property explicitly in the neural network architecture. At each timestep, the agent produces an internal hidden state embedding using the history of its observations but no knowledge of the goal. Separately, the goal is embedded and used as a query for an attention mechanism that produces a goal-attention hidden state $\widehat{\mathbf{h}}_{t}^{[0]}$.
In parallel, the agent produces analogous embeddings for each option, $\widehat{\mathbf{h}}_{t}^{[i]}$, and estimates the current value of each, $\widehat{\mathbf{v}}_{t}^{[i]}$. This asks the agent to predict *what its expected return would be if it were to focus on option $i$ until the end of the episode*. With the attention mechanism, the agent switches its hidden state to another option's hidden state if and only if the value of that option is higher than its current estimate of the value of the whole goal. This way the agent is internally encouraged to be consistent with the *value consistency* property of the game space. More specifically, $\mathbf{h}_{t}$ is the hidden state of an LSTM (Hochreiter and Schmidhuber, [1997](#bib.bib40)) that takes as input the processed pixel and proprioception observations. We attach an atomic predicate state predictor to the output of the LSTM: this predictor is a simple multi-task binary classifier, outputting $\mathbf{p}_{t}$, which predicts the dimensions of $\phi(\mathbf{s}_{t})$ relevant for $\mathbf{g}$, and is trained as an auxiliary classification loss (*i.e.* only shaping the internal representations, without explicitly affecting the policy (Jaderberg et al., [2017b](#bib.bib43))).
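As an unofficial sketch of what a multi-label predicate-classification loss of this kind can look like (the exact loss used in the paper is not specified here; this is a generic numerically stable sigmoid cross-entropy in numpy):

```python
import numpy as np

def predicate_auxiliary_loss(logits, predicate_targets):
    """Multi-label binary cross-entropy for predicting the relevant atomic
    predicates phi(s_t) from the recurrent output. Gradients from this loss
    only shape the shared representation; the policy head is untouched.

    logits:            array [batch, num_predicates], raw predictions.
    predicate_targets: array [batch, num_predicates] of 0/1 ground-truth values.
    """
    loss = (np.maximum(logits, 0) - logits * predicate_targets
            + np.log1p(np.exp(-np.abs(logits))))
    return loss.mean()

print(predicate_auxiliary_loss(np.array([[2.0, -1.0]]), np.array([[1.0, 0.0]])))
```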
The *goal attention (GOAT)* module then follows:

$$
\begin{aligned}
\mathbf{GOAT}(\mathbf{h}_{t},\mathbf{p}_{t},\mathbf{g}) &:= \sum_{i}\left\llbracket\alpha\!\left(\widehat{\mathbf{v}}^{[i]},\{\widehat{\mathbf{v}}^{[j]}\}_{j=0}^{o}\right)\right\rrbracket\,\widehat{\mathbf{h}}_{t}^{[i]} \\
\widehat{\mathbf{v}}^{[i]}_{t} &:= f_{\mathbf{V}}(\widehat{\mathbf{h}}_{t}^{[i]}) \qquad \forall_{i=0}^{o} \\
\widehat{\mathbf{h}}_{t}^{[i]} &:= \mathbf{GOAT}_{\mathrm{unit}}(\mathbf{h}_{t},\mathbf{p}_{t},\widehat{\mathbf{g}}_{i}) \qquad \forall_{i=0}^{o} \\
\mathbf{GOAT}_{\mathrm{unit}}(\mathbf{h},\mathbf{p},\mathbf{g}) &:= \mathrm{att}(\mathbf{h},[\mathbf{p};f_{\mathcal{G}}(\mathbf{g})]),
\end{aligned}
$$

where $\mathrm{att}(\cdot,\cdot)$ is an attention module (Bahdanau et al., [2015](#bib.bib4)), $\left\llbracket\cdot\right\rrbracket$ denotes the stop-gradient operation, meaning $\nabla_{x}\left\llbracket x\right\rrbracket=0$, and $\alpha(a,A)$
is a weighting scheme, *e.g.* $\alpha_{\mathrm{argmax}}(a,A)=1\iff a=\max\{A\}$, or $\alpha_{\mathrm{softmax},Z}(a,A):=\tfrac{\exp(a/Z)}{\sum_{b\in A}\exp(b/Z)}$. Given this parameterisation, we add corresponding *consistency losses*:

$$\ell^{\mathbf{V}}_{t} := \left(\left\llbracket\widehat{\mathbf{v}}_{t}\right\rrbracket-\widehat{\mathbf{v}}_{t}^{[0]}\right)^{2} \qquad\qquad \ell^{\mathbf{h}}_{t} := \left(\left\llbracket\widehat{\mathbf{h}}_{t}\right\rrbracket-\widehat{\mathbf{h}}_{t}^{[0]}\right)^{2},$$

where $\widehat{\mathbf{h}}_{t}:=\mathbf{GOAT}(\mathbf{h}_{t},\mathbf{p}_{t},\mathbf{g})$, $\widehat{\mathbf{v}}_{t}:=f_{\mathbf{V}}(\widehat{\mathbf{h}}_{t})$, and $f_{\mathcal{G}}$ is the goal embedding function (see Figure [37](#A1.F37)). These losses encourage the value predicted for the full goal $\mathbf{g}$ to not be smaller than the value of any of the options $\widehat{\mathbf{g}}_{i}$.
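The following is a rough numpy sketch of the GOAT computation (our simplification: the attention module, goal embedding, and value head are stand-in functions rather than the actual learned modules, and the stop-gradient is a no-op outside an autodiff framework):

```python
import numpy as np

def softmax(x, temp=1.0):
    z = np.exp((x - x.max()) / temp)
    return z / z.sum()

def goat(h, p, option_goal_embeddings, att, f_v, temp=1.0):
    """Goal attention (GOAT) module sketch.

    h:                       recurrent state, shape [d].
    p:                       predicate predictions, shape [num_predicates].
    option_goal_embeddings:  list of goal embeddings for i = 0..o, where
                             index 0 corresponds to the full goal g.
    att:                     callable att(h, query) -> attended hidden state [d].
    f_v:                     callable mapping a hidden state to a scalar value.
    """
    hidden = [att(h, np.concatenate([p, g_emb])) for g_emb in option_goal_embeddings]
    values = np.array([f_v(x) for x in hidden])
    # Weights depend on the per-option values; gradients would be stopped through them.
    weights = softmax(values, temp)
    return sum(w * x for w, x in zip(weights, hidden)), values

# Tiny usage example with random stand-in modules.
rng = np.random.default_rng(0)
d, n_pred, g_dim = 8, 4, 3
W = rng.normal(size=(d, d + n_pred + g_dim))
v = rng.normal(size=d)
att = lambda h, q: np.tanh(W @ np.concatenate([h, q]))
f_v = lambda x: float(v @ x)
h_goal, option_values = goat(rng.normal(size=d), rng.normal(size=n_pred),
                             [rng.normal(size=g_dim) for _ in range(3)], att, f_v)
```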
A schematic view of this process is provided in Figure [11](#S5.F11), with the details of the goal embedding $f_{\mathcal{G}}$ and atomic predicate predictions provided in Figure [37](#A1.F37). Note that these are all internal value function predictions that do not use any privileged information. We observed that faster learning can be achieved if the value function $\mathbf{v}$ used for RL itself does get access to extra information (Vinyals et al., [2019](#bib.bib107)) in the form of the reward from the last step, $r_{t-1}$. We add a simple L2 loss to align these two value heads in a co-distillation manner (Zhang et al., [2018](#bib.bib113)):

$$\ell^{\mathrm{align}} := \|\mathbf{v}_{t}-\widehat{\mathbf{v}}_{t}\|^{2}.$$

We do not stop gradients through $\mathbf{v}$, meaning that the privileged-information value head $\mathbf{v}$ is penalised for expressing quantities that the internal $\widehat{\mathbf{v}}$ cannot model as well.

### 5.2 Dynamic Task Generation

Due to the vastness of the task space, for any given agent many tasks will be either too easy or too hard to generate good training signal. To tackle this, we allow the training task distribution to change throughout the learning process in response to the policy of the agent itself. The agent's neural network is trained with RL on the instantaneous training task distribution $\mathrm{P}_{\pi}(\aleph)$. We operationalise $\mathrm{P}_{\pi}(\aleph)$ by filtering a proposal distribution with a simple set of tests evaluating a task's usefulness for the current stage of learning. Proposal training tasks are generated in a similar manner to the validation set of evaluation tasks: worlds, games, and co-players are generated as described in Section [4.2](#S4.SS2), ensuring no collisions with the validation and test sets (Section [A.3](#A1.SS3)).
We establish a task's usefulness by comparing the performance of the agent to the performance of a control policy $\pi_{\mathrm{cont}}$. The intuition of using a control policy is that the agent will only train on a task if the agent's returns are significantly better than those of the control policy. This guarantees that the agent is performing meaningful actions in the task that affect the return. In practice, we set the control policy to be a uniform random action policy. However, an interesting alternative would be to set the policy to be the agent's past policy – this would let us determine whether the agent's policy has recently improved or worsened on this task. A proposal task is accepted (used for training) if and only if the following three criteria are met:

1. The agent has a low probability of scoring high on a given task:
$$\mathrm{Pr}[R_{\pi}(\mathbf{x})>m_{s}]<m_{\mathrm{solved}}.$$
2. The agent has a high probability of performing better than the control policy:
$$\mathrm{Pr}[R_{\pi}(\mathbf{x})>R_{\pi_{\mathrm{cont}}}(\mathbf{x})+m_{>}]>m_{>\mathrm{cont}}.$$
3. The control policy is not performing well:
$$\mathbf{V}_{\pi_{\mathrm{cont}}}(\mathbf{x})<m_{\mathrm{cont}}.$$

At a high level, the filtering of proposal tasks gives a mechanism for removing tasks that are too-easy (criterion 1), tasks that are too-hard (criterion 2), and tasks in which the control policy is sufficient to achieve a satisfactory score (criterion 3), based on the agent's current behaviour at each point in training. All the above parameters $\mathbf{m}=\{m_{>},m_{\text{s}},m_{\text{cont}},m_{>\text{cont}},m_{\text{solved}}\}$ form agent-specific hyperparameters that define $\mathrm{P}_{\pi}(\aleph)$. We estimate the criteria using Monte Carlo with 10 episode samples for each policy involved.
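A small sketch of this acceptance test (ours; the threshold names mirror the text, the Monte Carlo estimate uses the 10-episode sampling mentioned above, and pairing agent and control episodes elementwise for criterion 2 is our simplification):

```python
import numpy as np

def accept_task(agent_returns, control_returns, m):
    """Decide whether a proposal task is used for training.

    agent_returns, control_returns: arrays of 10 Monte Carlo episode returns
    for the agent and the control policy on the proposed task.
    m: dict of thresholds {"s", "solved", ">", ">cont", "cont"}.
    """
    agent_returns = np.asarray(agent_returns, dtype=float)
    control_returns = np.asarray(control_returns, dtype=float)
    not_too_easy = np.mean(agent_returns > m["s"]) < m["solved"]                    # criterion 1
    not_too_hard = np.mean(agent_returns > control_returns + m[">"]) > m[">cont"]   # criterion 2
    control_weak = control_returns.mean() < m["cont"]                               # criterion 3
    return bool(not_too_easy and not_too_hard and control_weak)

m = {"s": 450.0, "solved": 0.1, ">": 2.0, ">cont": 0.9, "cont": 5.0}
print(accept_task([40, 55, 60, 30, 45, 50, 35, 65, 70, 42],
                  [0, 1, 0, 2, 0, 0, 1, 0, 0, 3], m))  # True
```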
For example, a control policy return threshold $m_{\text{cont}}=5$ would disallow any training tasks on which the control policy is able to get a return of at least 5 (the reward is on a scale of 0 to 900). When using a uniform-random policy over actions as the control policy, this could be used to ensure the training task distribution does not contain tasks on which it is very easy to get reward. The combination, for example, of $m_{>}=2$ and $m_{>\text{cont}}=0.9$ would only allow training tasks where the agent achieves, in all ten episode samples, a return of at least 2 reward more than the return achieved by the control policy – this could ensure that the agent only trains on tasks where its behaviour is already better than that of the control policy. As a final example, the combination of $m_{\text{s}}=450$ and $m_{\text{solved}}=0.1$ would disallow training on any task where the agent is able to achieve more than 450 reward on any of its episode samples – this could filter out tasks where the agent is already performing well.

Whilst this filtering mechanism provides a way to supply the agent with a dynamic training task distribution, the filtering criterion itself may benefit from being dynamic. What is considered too hard or too easy at the beginning of training may encourage early learning, but cause saturation or stalling of learning later in training. Due to the vastness of the XLand task space we seek learning processes that do not saturate, but rather dynamically shift to ensure the agent never stops learning. To address this, we incorporate population based training (PBT) (Jaderberg et al., [2017a](#bib.bib42)), which provides a mechanism to dynamically change hyperparameters of the learning process (Jaderberg et al., [2019](#bib.bib44)). Rather than training a single agent, we train a population of agents, each agent training on its own task distribution $\mathrm{P}_{\pi_{k}}(\aleph)$ that is controlled by its own hyperparameters $\mathbf{m}_{k}$. Additionally, the learning rate and the V-MPO hyperparameter $\epsilon_{\alpha}$ are added to the set of hyperparameters modified by PBT. PBT requires a fitness function to compare two agents and propagate the preferred agent; we use the normalised percentiles on the validation set. Periodically during training, agents are compared, and only if an agent Pareto dominates another agent in normalised score across percentiles does it undergo evolution – the dominant agent's weights are copied, its instantaneous task distribution copied, and its hyperparameters copied and mutated, taking the place in training of the non-dominant agent. More details can be found in Section [A.7](#A1.SS7).
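An illustrative sketch of such an evolution step (ours; the agent container, `perturb` factor, and mutation scheme are hypothetical, and `pareto_better` is the comparison sketched earlier):

```python
import copy
import random

def pbt_evolution_step(agents, validation_percentiles, perturb=1.2):
    """One round of population based training bookkeeping.

    agents: list of objects with .weights, .task_hyperparams (the thresholds m_k)
            and .learning_rate fields.
    validation_percentiles: list of 51-dimensional percentile curves, one per agent.
    """
    for i, a in enumerate(agents):
        for j, b in enumerate(agents):
            if i != j and pareto_better(validation_percentiles[j], validation_percentiles[i]):
                # Agent j Pareto dominates agent i: copy into i's slot and mutate.
                a.weights = copy.deepcopy(b.weights)
                a.task_hyperparams = copy.deepcopy(b.task_hyperparams)
                a.learning_rate = b.learning_rate * random.choice([1 / perturb, perturb])
                for key in a.task_hyperparams:
                    a.task_hyperparams[key] *= random.choice([1 / perturb, perturb])
                break
```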
This process constantly modifies the dynamic task generation process and the agent population to drive iterative improvement in normalised percentiles.

### 5.3 Generational training

With this combination of deep RL and dynamic task distributions we hope to provide a training process that continually improves agents in terms of their normalised percentiles as measured on the validation task set. However, in practice, the limitations of RL and neural network training dynamics still pose a challenge in training agents on the XLand task space from scratch. It has been observed that higher performance and faster training can be achieved in deep RL by first training an agent, then subsequently training a new agent on the identical task whilst performing policy distillation from the first agent (Furlanello et al., [2018](#bib.bib28); Schmitt et al., [2018](#bib.bib83); Czarnecki et al., [2019](#bib.bib19)). We employ this technique multiple times on populations of agents: a population of agents is trained, then a new generation of agents is trained distilling from the best agent of the previous generation's population, with this process repeated multiple times. Each generation bootstraps its behaviour from the previous generation. Furthermore, these previous generations also give us an opportunity to increase our pool of co-player policies and increase the diversity of our training experience, similarly to the AlphaStar league (Vinyals et al., [2019](#bib.bib107)). At each generation, our training procedure includes the best player from each previous generation in this pool of players. A final advantage of generational training of populations is that the learning objectives and agent architecture can vary generation-to-generation. We take advantage of this by using self reward-play: an RL objective which encourages exploration. In our training methodology, self reward-play is utilised for the first few generations, followed by the regular RL objective in the later generations.

##### Self reward-play

One of the central desires for a generally capable agent is that the agent should catastrophically fail on as few tasks as possible. To target this objective we seek agents that minimise the smallest non-zero normalised percentile – that is, agents which obtain at least one timestep of reward in as many tasks as possible; the problem of exploration. We define *participation* as the percentage of tasks in which the agent obtains a non-zero reward. To aid learning participation, we present challenges to the agent that it is capable of satisfying, by asking the agent to revert changes in the environment that the agent itself previously created. Self reward-play rewards the agent for satisfying a goal $\mathbf{g}$; after succeeding, the agent is rewarded for fulfilling $\texttt{not}(\mathbf{g})$ without resetting the environment, with this flip in goal repeating after each satisfaction. This can be seen as an agent playing in a self-play competitive manner against itself, where one player must satisfy $\mathbf{g}$ and the other player must satisfy $\texttt{not}(\mathbf{g})$; however, the players act sequentially and are played by the same agent. In practice, we implement this by using the reward $r^{\mathrm{srp}}_{t} := |r_{t}-r_{t-1}|$ and setting the discount $\gamma_{t}=0$ if $r^{\mathrm{srp}}_{t}>0$ (which rewards the agent for minimising the time until the next goal flip).
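A tiny sketch of this reward transformation (ours; `env_rewards` is a hypothetical per-timestep 0/1 goal-satisfaction signal, and 1.0 stands in for the usual discount):

```python
def self_reward_play(env_rewards):
    """Convert raw 0/1 goal-satisfaction rewards into self reward-play rewards
    and discounts: reward the change in satisfaction, and cut the discount to
    zero whenever a flip is rewarded."""
    srp_rewards, discounts, prev = [], [], 0.0
    for r in env_rewards:
        srp = abs(r - prev)
        srp_rewards.append(srp)
        discounts.append(0.0 if srp > 0 else 1.0)
        prev = r
    return srp_rewards, discounts

# The goal becomes satisfied at t=2 and unsatisfied again at t=4.
print(self_reward_play([0, 0, 1, 1, 0, 0]))
# -> ([0, 0, 1, 0, 1, 0], [1.0, 1.0, 0.0, 1.0, 0.0, 1.0])
```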
This can be seen as the agent playing in a self-play competitive manner against itself, where one player must satisfy $\mathbf{g}$ and the other must satisfy $\texttt{not}(\mathbf{g})$; however, the players act sequentially and are played by the same agent. In practice, we implement this by using the reward $r^{\mathrm{srp}}_{t}:=|r_{t}-r_{t-1}|$ and setting the discount $\gamma_{t}=0$ whenever $r^{\mathrm{srp}}_{t}>0$ (which rewards the agent for minimising the time until the next goal flip). Empirically, we find that optimising for self reward-play drastically improves exploration. The agent is encouraged to interact with the world and to change its reward state, after which it must change the state back again, and so on. In comparison, when optimising the discounted sum of environment reward, changing the environment carries the risk of changing the (unobserved) environment reward from 1 to 0, which discourages the agent from interacting with the environment. As a result, agents that optimise with self reward-play achieve significantly higher participation in the same amount of training time (see Section [6.2.2](#S6.SS2.SSS2)). However, by construction, self reward-play does not optimise agents to be competent (*i.e.* whilst the smallest non-zero normalised score percentile is minimised, the normalised percentiles themselves remain low). We discuss in detail how self reward-play is leveraged in Section [6.2](#S6.SS2).

##### Iterative normalised percentiles

As discussed in Section [4.2](#S4.SS2), the test set contains a fixed set of co-player policies (also used to evaluate against). The validation set, however, contains only the trivially generated noop-action and random-action policies. For evaluation, co-player policies are required to play the validation tasks, and the normaliser score used by the normalised percentile metric also requires a fixed set of co-player policies. The generational training process allows us to start with only the trivially generated noop-action and random-action policies and to iteratively refine the validation normalised percentile metric: each generation creates agents which are added to the validation set and used to update the normalised percentile metric, and the next generation incorporates the previous generation's policies in its training, with this process repeating, iteratively refining the normalised percentile metric and expanding the set of co-player policies. This means that the normalised percentiles metric on the validation set used to guide training changes each generation as more policies are added to the validation co-player set.
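Returning to the self reward-play reward defined above, a minimal sketch of how the reward and discount transformation could be computed from the raw per-timestep goal-satisfaction reward is shown below; the initial previous reward and the base discount value are illustrative assumptions, not values from the paper.

```python
def self_reward_play(rewards, gamma=0.99):
    """Given per-timestep goal-satisfaction rewards r_t (0 or 1), return
    the self reward-play rewards r_srp_t = |r_t - r_{t-1}| and the
    per-timestep discounts, zeroed whenever the goal state flips."""
    r_srp, discounts = [], []
    prev = 0.0  # assumed reward before the first timestep
    for r in rewards:
        flip = abs(r - prev)
        r_srp.append(flip)
        discounts.append(0.0 if flip > 0 else gamma)
        prev = r
    return r_srp, discounts


# Example: the goal becomes satisfied at t=2 and unsatisfied again at t=4.
print(self_reward_play([0, 0, 1, 1, 0]))
# -> ([0, 0, 1, 0, 1], [0.99, 0.99, 0.0, 0.99, 0.0])
```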
Note that for all results reported in Section [6](#S6), we report the normalised percentiles on the test set, which is fixed, with the same fixed set of co-player policies, for all generations.

### 5.4 Combined Learning Process

These three components of the training process – deep reinforcement learning, dynamic task generation, and generational training – are combined to create a single learning process. The three pieces are hierarchically related. On the smallest wall-clock timescale (seconds), deep RL provides weight updates for the agents' neural networks, iteratively improving their performance on their task distributions. On a larger timescale (hours), dynamic task generation and population based training modulate the agents' task distributions to iteratively improve the Pareto front of validation normalised percentile scores. Finally, on the largest timescale (days), generational training iteratively improves population performance by bootstrapping off previous generations, whilst also iteratively updating the validation normalised percentile metric itself.

![The combined learning process](/html/2107.12808/assets/figures/fig2_v1-01.png)

Figure 12: The combined learning process. (Top) Generations of agents are trained, composed of populations of agents where the best performing agents become distillation teachers of the next generation as well as co-players to train against. (Middle) Inside each population, agents are trained with dynamic task generation that continuously adapts the distribution of training tasks $\mathrm{P}_{\pi_k}(\aleph)$ for each agent $\pi_k$, and population based training (PBT) modulates the generation process by trying to Pareto dominate other agents with respect to the normalised percentiles metric. (Bottom) Each agent trains with deep reinforcement learning and consists of a neural network producing the policy $\pi$ and value function $\mathbf{v}$.

![Generations of performance](/html/2107.12808/assets/figures/generations.png)

Figure 13: Generations of performance as measured on the held-out test task set. The first two generations focus on the maximisation of participation using the self reward-play RL objective (Section [5.3](#S5.SS3)). In between generations, the best agent with respect to the objective is selected and used as a teacher and as an additional co-player to play against in further generations. Generations 3–5 focus on the improvement of normalised percentiles, and use the raw reward for the RL algorithm. The dashed line in each plot corresponds to the performance of the teacher from the previous generation. The co-players are the set of policies acting as co-players in the multiplayer tasks, with this set initialised to just the trivially created noop-action and random-action agents (white and grey circles).

![The learning surface](/html/2107.12808/assets/figures/blogfig3.png)

Figure 14: (Top) On the left we see the *learning surface*, showing the progress of a generation 5 agent through time with respect to each of the normalised percentiles. The surface shows the normalised score (height) for each percentile (depth) through training (x-axis).
Therefore, the flat bottom of the surface (zero height) is the part of the space where the agent is not participating. On the right, we see an orthogonal projection onto the surface at the end of training. (Bottom) We highlight the performance on 6 hand-authored tasks at three points in training, showing how improvements in the normalised percentiles correspond to improvements on these hand-authored tasks.

From the opposite perspective, the overall system continuously creates generations of agents seeking to improve the validation normalised percentile metric – to gradually improve coverage and competence on tasks. In order to do so, a generation's population changes the distribution of training tasks for each agent such that the agents keep improving the Pareto front of validation normalised percentile scores. The agents themselves update their neural network weights with reinforcement learning based on the stream of experience they generate from their training task distributions, gradually improving their performance on this shifting distribution. The whole process is summarised in Figure [12](#S5.F12). The iterative nature of the combined learning system, with the absence of a bounded metric being optimised, leads to a potentially open-ended learning process for agents, limited only by the expressivity of the environment space and the agent's neural network.

6 Results and Analysis
-----------------------

In this section, we show the results of training agents with the learning process introduced in Section [5](#S5), with the specific experimental setup described in Section [6.1](#S6.SS1). The learning dynamics are explored in Section [6.2](#S6.SS2) with respect to the evaluation metric defined in Section [4](#S4). In Section [6.3](#S6.SS3), we analyse the zero-shot generalisation performance of the trained agent across the test set. Section [6.4](#S6.SS4) delves into some emergent agent behaviour that is observed on hand-authored probe tasks. Moving beyond zero-shot behaviour, in Section [6.5](#S6.SS5) we show the results of finetuning the trained agents for wider transfer. Finally, in Section [6.6](#S6.SS6) we analyse the representations formed by the agent's neural network. All the results reported in this section are computed on tasks that were held out of training.
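Throughout this section, performance is summarised by participation and by percentiles of normalised score across held-out tasks. As a rough illustration of these summary statistics (computed here from per-task normalised scores; this is a sketch of the metric described in Section 4, not the exact evaluation code), consider:

```python
import numpy as np

def summarise(normalised_scores):
    """normalised_scores: one value per held-out task, where each value is
    the agent's return divided by the task's normaliser score (Section 4)."""
    s = np.asarray(normalised_scores, dtype=float)
    return {
        "participation": float(np.mean(s > 0)),          # fraction of tasks with any reward
        "10th_percentile": float(np.percentile(s, 10)),  # low-end competence
        "50th_percentile": float(np.percentile(s, 50)),  # median normalised score
    }

# e.g. summarise([0.0, 0.4, 1.1, 0.9, 0.7])
```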
### 6.1 Experimental Setup

More details on the architecture, hyperparameters, and other elements of the experimental setup are provided in Sections [A.4](#A1.SS4), [A.5](#A1.SS5), [A.6](#A1.SS6), and [A.7](#A1.SS7). Each agent is trained using 8 TPUv3s and consumes approximately 50,000 agent steps (observations) per second.

### 6.2 Agent Training

We trained five generations of agents, varying the learning setup with each generation. The results of this process are shown in Figure [13](#S5.F13). The learning process per generation is described below.

![Evolution of the training task distribution](/html/2107.12808/assets/figures/training_dists.png)

Figure 15: Evolution of the training distribution of tasks due to dynamic task generation throughout 5 generations of agents (showing values from one agent per population only). We characterise the training distribution by five measures (from the top): competitiveness (a property of games), number of options (a property of games), opponent strength (how performant the co-player in the task is), shortest path entropy (a property of the worlds), and initial atom changes (a property of the tasks: how many atomic predicates must be changed to satisfy an option). These change substantially throughout training; for example, the strength of the opponents grows over time, generation 3 focuses more on worlds with larger shortest path entropy, and later generations focus on more competitive games.

The co-player set of policies was initialised with a noop and a random policy. We used the generational mechanisms described in Section [5.3](#S5.SS3). At the end of each generation, we selected the best agent that was produced throughout the generation.
This agent was then used in three ways by subsequent generations: 1) as a policy to use for distillation in the next generation, 2) as an additional policy in the co-player set of policies, and 3) as an additional player in the computation of the validation normalised percentile metric. We varied the learning setup in the following way across generations. In the first two generations, the agent was trained with self reward-play to encourage exploration. In these generations, the fitness used for PBT was the average participation as measured on the validation task set. Subsequent generations were trained without self reward-play and used Pareto dominance over the 10th, 20th and 50th percentiles of normalised score on the validation task set as the PBT fitness. When selecting the best agent for the next generation, the agent with the highest participation was chosen in the first two generations, and the agent with the highest 10th percentile normalised score in subsequent generations. After two generations of training, we obtained an agent trained with self reward-play with a high test participation (91%) but low test 10th and 50th percentile normalised scores – 23% and 79% respectively. The generation 3 agents quickly outperformed these scores as they did not use self reward-play and instead maximised the true reward. Our final agent in generation 5 reached 95% participation (and participates in 100% of the tasks that humans can; see details in Section [6.3.1](#S6.SS3.SSS1)), 82% 10th percentile normalised score, 112% 50th percentile normalised score (median), and 585 average return on the hand-authored task set (which is provably at least 65% of the optimal policy value), Figure [13](#S5.F13) (right). The learning surface for the final 5th generation is shown in Figure [14](#S5.F14).

#### 6.2.1 Dynamic Task Generation Evolution

Figure [15](#S6.F15) shows how various properties of our tasks change throughout training as a result of dynamic task generation (DTG).

![Ablation: participation](/html/2107.12808/assets/figures/gen1_ablations.png)

(a) Participation as we ablate away dynamic task generation, the variable discount used in self reward-play, and self reward-play altogether.

![Ablation: 10th percentile normalised score](/html/2107.12808/assets/figures/gen3_ablations.png)

(b) The 10th percentile normalised score as we ablate away PBT, the GOAT architecture, and dynamic task generation.

![Ablation: generational training](/html/2107.12808/assets/figures/gbt_ablations.png)

(c) The 10th percentile normalised score of the fourth and fifth generation as we keep the fourth generation running.

Figure 16: Ablations of the training methods. In all plots, the curve designates the median agent performance in the population while the shaded area shows the spread between the best and the worst agent.

We can see that from generation 3 onward, DTG significantly increases the average strength of the co-players that the agent trains against.
Similarly, there is an increase in the competitiveness, as well as a decrease in the number of options, of the games presented to the agent for training. The composition of these three factors – stronger opponents, more competitive scenarios, and fewer options – creates a training distribution of hard problems (since agents are forced to compete with capable opponents). Similarly, the number of initial atomic predicate changes needed gradually increases, meaning that agents are increasingly placed in scenarios where multiple predicate states must be changed from their initial state to obtain reward. All these changes are driven by the agent's performance; the agent is given no direct control to focus on any of the above properties, and thus these dynamics are purely emergent.

#### 6.2.2 Ablation Studies

Our ablation studies evaluate the impact of different aspects of our training methodology.

##### Early generations: self reward-play and dynamic task generation.

As discussed in Section [5.3](#S5.SS3), early training in our environment is difficult. We use self reward-play to encourage the agent to explore changing the environment state, and dynamic task generation to avoid training on tasks that are initially too hard for the agent and would not provide any useful training signal. In this ablation, we trained multiple agents from scratch with a diverse pool of co-player policies. We show the participation of the different trained agents in Figure [16(a)](#S6.F15.sf1). Our full method, which used both dynamic task generation and self reward-play, reached a participation of 84% after 16 billion steps. We see that removing in turn dynamic task generation, the use of zero discounts on reward-change steps (part of our self reward-play procedure), and self reward-play altogether resulted in significant reductions in performance. When none of these methods is used, the agent fails to learn any meaningful policy.

##### Later generations: Population based training, the GOAT architecture and dynamic task generation.

In our next ablation, we consider a setup similar to the third generation in our main experiments. The agents were not trained with self reward-play, but during the first 4 billion steps had a distillation loss towards the teacher policy of an agent that was trained with self reward-play. The agents were trained with a diverse pool of co-player policies. The results are shown in Figure [16(b)](#S6.F15.sf2). We trained each agent for 20 billion steps. As in our main experiments, our full method uses PBT, the GOAT architecture and dynamic task generation. Our first ablation removes PBT from our method, replacing it with a simple sweep across 8 agents, which leads to a reduction of roughly 20% in the performance of the best agent. Additionally removing the GOAT architecture from our method and replacing it with a simpler architecture, similar to the one used in Hessel et al. ([2019](#bib.bib39)), yields another roughly 30% reduction in performance.
Finally, removing dynamic task generation from our method whilst keeping other aspects constant leads to a roughly 65% reduction in performance.

##### Generation based training.

In our final ablation, we consider the benefits of generation based training. We kept the fourth generation of the main experiments from Section [6.2](#S6.SS2) running in order to compare its performance to the fifth generation. The results are shown in Figure [16(c)](#S6.F15.sf3). We offset the fifth generation's curve to the point at which the best agent from the fourth generation was selected. We can see that as training progresses the fifth generation outperforms the previous generation (both when comparing the best agents from the corresponding populations and when comparing the population averages), even when generation 4 is trained for the same amount of time.

### 6.3 Performance Analysis

Due to the vastness of the task space, with unknown maximum scores, there is no single notion of performance to report. Consequently, we rely on relative performance analysis and other qualitative notions of progress described below.

![Test set coverage](/html/2107.12808/assets/figures/Validation_coverage.png)

Figure 17: A visualisation of the test set of tasks, with the corresponding agent performance. Red corresponds to a low normalised score and green to a high one. We identify four sources of games on which the agent scores 0 reward (listed on the right): 1) tasks that require the agent to put the opponent on a specific floor (marked as triangles in the galaxy); 2) tasks that require the agent to make the co-player hold an object (marked as circles in the galaxy); 3) a single task (in red in the galaxy) which is impossible due to a very rare physics simulation bug; 4) a single task (in orange in the galaxy) that requires the agent to put the co-player on a given floor by a composition of two predicates. After removing these four types of tasks, which cannot be solved even by a human, our agents participate in *every test task*.

#### 6.3.1 Coverage

First, we focus our attention on answering the question: *are there any test tasks where the agent never reaches a rewarding state?* We identify that there are indeed a few percent of this space where none of the agents ever score any points. Further investigation shows that all of these failed tasks involve impossible challenges, requiring an agent to *make the co-player hold something* (which is impossible without the cooperation of the opponent) or to *place the co-player on a specific floor* (which can also be impossible to achieve given the physical simulation of the game). Furthermore, we identify a single task which, due to a very rare physics bug, is impossible to solve because the object of interest spawns outside the reachable region. Figure [17](#S6.F17) shows these games in the XLand galaxy. After removing the impossible tasks listed above, our agent participates in every single test task, suggesting it is indeed widely capable.
#### 6.3.2 Relative Performance

We investigate various properties of the games and how they translate to the relative improvement of our agents (using test normalised scores to measure this quantity).

![Normalised score distributions](/html/2107.12808/assets/figures/performance.png)

Figure 18: (Left) Box plots showing the distribution of normalised scores for the final agent across various types of validation tasks. Whiskers denote the minimum and maximum value, the notch denotes the median, and the box area is between the 25th and 75th percentiles. The white square denotes average performance. (Right) Normalised test percentiles for the corresponding subsets of tasks.

In Figure [18](#S6.F18) we can see that the overall normalised score of our agent is higher on games which are more competitive, suggesting that it is in these challenging competitive scenarios that our proposed learning process brings the biggest improvement relative to the pretrained evaluation policies in the test set. Similarly, high normalised score is correlated with a large number of goal predicates (and thus a need to reason about many relations at the same time) as well as with a high number of initial atom changes (the number of relations that need to be changed before an agent can get to a rewarding state). We also observe the biggest improvements with fewer options – games with just one option are much harder on a purely navigational level, as the agent cannot *choose* what to do, but rather is forced to satisfy a single option. Finally, we also see a big improvement relative to the evaluation policies when the agent is tasked with goals involving object–object interactions, such as *make the yellow sphere be near the purple pyramid*, as opposed to tasks relating to the players themselves, e.g. *hold a purple sphere*. Overall, we see a general trend of agents showing the greatest improvements in the most challenging parts of our game space.

### 6.4 General Capabilities

We now provide an overview of some of the general capabilities observed in the agent, allowing it to participate in a variety of tasks, execute various behaviours, and handle new, unexpected situations. Whilst the current instantiation of XLand is extremely vast, one can easily hand-author tasks that could only extremely rarely, or never, be generated during training due to the constraints of our training task generation process. For example, we can place agents in worlds that lack ramps to challenge their ability to navigate, we can make them face unseen co-players, and we can execute interventions mid-episode. These probe tasks allow us to better understand and clarify the limits of generality of our agents.
#### 6.4.1 Hand-authored tasks

We now consider the qualitative behavioural properties of our agents at different points throughout training on hand-authored tasks (see Figure [10](#S4.F10) for some examples and Tables [7](#A1.T7) & [8](#A1.T8) for a full list). We compare two agents on a selection of the hand-authored task set: the final generation 4 agent ($\pi_{\mathrm{G}_4}$) and the final generation 5 agent ($\pi_{\mathrm{G}_5}$). A selection of videos of the generation 5 agent ($\pi_{\mathrm{G}_5}$) can be found in the [supplementary results video here](https://youtu.be/lTmL7jwFfdw).

##### Capture the flag.

In this two-player task, each agent's goal is to capture the cube in the opponent's base and bring it back to its own base. An agent gets a reward if the opponent's cube touches the floor of its own base while its own cube also touches the floor of its own base, with the opponent having the equivalent goal with respect to its base floor. Both agents are able to navigate to their opponent's base to capture the cube. However, $\pi_{\mathrm{G}_4}$ often finds it difficult to find the way back to its own base. Furthermore, it often gets tagged by the opponent, making it respawn at its initial spawn location. $\pi_{\mathrm{G}_5}$, on the other hand, shows better navigational skills and usually finds its way back to its base after capturing the cube.

##### Hide and seek: hider.

$\pi_{\mathrm{G}_4}$ moves somewhat randomly, with abrupt changes in direction. This can make it hard for the opponent to keep seeing it. $\pi_{\mathrm{G}_5}$, on the other hand, moves very deliberately away from the co-player, often up the ramp and onto the side of the platform opposite the co-player. This forces the co-player to go around the ramp.

##### Hide and seek: seeker.

$\pi_{\mathrm{G}_4}$ searches for the co-player throughout the world and then stands still once the co-player is in its vision. It does not anticipate the co-player's movement as it is about to leave its field of vision.
$\pi_{\mathrm{G}_5}$ prefers to continuously follow the co-player in order to be right next to it. In this way, it rarely lets the co-player out of its vision.

##### King of the hill.

In this two-player task, the agent gets a reward if it is the only player at the top of the hill (touching the white floor). Once they get to the top of the hill, both $\pi_{\mathrm{G}_4}$ and $\pi_{\mathrm{G}_5}$ stay there and are able to push away the co-player whenever it comes near. However, $\pi_{\mathrm{G}_4}$ sometimes fails to navigate to the top of the hill, getting stuck in a loop. $\pi_{\mathrm{G}_5}$ is more consistent in its navigational abilities to get to the top of the hill.

![Option evaluation case study](/html/2107.12808/assets/x1.png)

Figure 19: (Top) From the left: rendering of the world; a goal composed of 3 options, each represented as a single predicate; plots of the internal value function predictions of the GOAT module, with the colours corresponding to specific options. (Middle) Call-outs of 5 situations, from the perspective of the agent. (Bottom) A Kohonen Network representing the activity of the GOAT module (Section [6.6](#S6.SS6)). The four coloured circles represent the Kohonen Neurons activity (from top): whether the agent is early in the episode (yellow), if it is optimistic about future rewards (purple), if it thinks it is in a rewarding state (cyan), if it thinks multiple atoms are missing (orange). See Figure [28](#S6.F28) for more details.

##### XRPS Counter Yellow Sphere.

In XRPS (Section [3.2.3](#S3.SS2.SSS3)), the agent can get points for holding any sphere, as long as its colour is not countered by the colour of the sphere the opponent is holding. However, the opponent player is goal-conditioned to hold the yellow sphere only. $\pi_{\mathrm{G}_4}$ tends to hold a sphere at random from the ones available. When this happens to be the black sphere, it gets no reward due to the co-player countering it with the yellow sphere. $\pi_{\mathrm{G}_5}$, on the other hand, notices the co-player holding the yellow sphere and counters it by stealing the yellow sphere and holding it itself. It succeeds at holding it while the co-player tries to get it back.
However, neither agent explicitly seeks to hold the purple sphere, which would counter the opponent holding the yellow sphere.

##### Stop rolling.

In this task, the agents have to keep a sphere from rolling to the bottom of a slope. The agents only get a reward if the sphere is not touching the bottom floor and is not being held. $\pi_{\mathrm{G}_4}$ simply lifts the sphere up in the air and lets it drop, gaining rewards for the brief moments when the sphere is dropping. $\pi_{\mathrm{G}_5}$ throws the sphere up the slope and then tries to block it from rolling down with its body. Often, $\pi_{\mathrm{G}_5}$ manages to corner the sphere between its body and the wall while the sphere is on the slope, and scores rewards for the remainder of the episode without moving.

#### 6.4.2 Behavioural case studies

Let us now focus on 3 specific case studies showing interesting emergent behaviours.

![Tool use case study](/html/2107.12808/assets/x2.png)

Figure 20: (Top) From the left: rendering of the world; a goal composed of one option; plots of the internal value function prediction of the agent. (Middle) Call-outs of 5 situations, from the perspective of the agent. (Bottom) A Kohonen Network representing the activity of the GOAT module (Section [6.6](#S6.SS6)). The four coloured circles represent the Kohonen Neurons activity (from top): whether the agent is early in the episode (yellow), if it is optimistic about future rewards (purple), if it thinks it is in a rewarding state (cyan), if it thinks multiple atoms are missing (orange). See Figure [28](#S6.F28) for more details.

##### On-the-fly option evaluation

In Figure [19](#S6.F19) we see an agent trying to solve a task with a goal consisting of 3 possible options. Initially, the agent does not see a yellow sphere, but it does see a black pyramid and the orange floor. Its third option rewards the agent for placing the black pyramid on the orange floor, and looking at the agent's internal option-value predictions, we see that indeed the value of the whole goal $\widehat{\mathbf{v}}_t^{[0]}$ (violet curve) is closest to the third option value $\widehat{\mathbf{v}}_t^{[3]}$ (green curve).
Around 2.5s into the episode, the agent sees a yellow sphere, which leads to a dramatic increase in its internal prediction of what would happen if it were to satisfy option 1 instead ($\widehat{\mathbf{v}}_t^{[1]}$, blue curve), which rewards the agent for placing the black pyramid near the yellow sphere. As a result, the internal value function of the whole game switches to upper bound the first option, and rather than navigating to the orange floor, the agent brings the black pyramid next to the sphere. This case study exemplifies the internal reasoning of the GOAT module, hinting at intentional decisions about which options to satisfy based on the current state of the environment.

##### Tool use

In Figure [20](#S6.F20) we see an agent placed in a world where it needs to get near a purple pyramid placed on a higher floor. However, in this world there is no ramp leading to the upper floor – this initial lack of accessibility cannot occur during training due to the constraints of the procedural world generation process. We observe the agent initially trying to move around the red block, looking for a ramp. It starts to throw various objects around, which can be interpreted either as looking for a ramp hidden underneath, or simply as an emergent heuristic behaviour of trying to increase the entropy of the environment in a situation where the agent does not know what to do. Around 5 seconds into the episode a slab thrown by the agent lands in a position partially supported by the upper floor, and the agent uses a freezing gadget to keep it in place. A moment later the agent can see the target purple pyramid in front of it, with the frozen object forming a ramp leading up to the purple pyramid's floor, and its internal value estimate rapidly increases, suggesting that the agent understands it has found a solution to the task. The agent navigates onto the frozen object and reaches its goal. We can see that the internal representation activity (described in Section [6.6](#S6.SS6)) at 10 seconds is very similar to the final internal activity from the previous case study – we recognise this visual pattern as emerging when an agent is in a *content* state.

![Experimentation case study](/html/2107.12808/assets/x3.png)

Figure 21: (Top) From the left: rendering of the world; a goal composed of one option; plots of the internal value function prediction of the agent. (Middle) Call-outs of 5 situations, from the perspective of the agent. (Bottom) A Kohonen Network representing the activity of the GOAT module (Section [6.6](#S6.SS6)). The four coloured circles represent the Kohonen Neurons activity (from top): whether the agent is early in the episode (yellow), if it is optimistic about future rewards (purple), if it thinks it is in a rewarding state (cyan), if it thinks multiple atoms are missing (orange).
See Figure [28](#S6.F28) for more details.

##### Experimentation

Figure [21](#S6.F21) shows a final case study, where an agent is placed in a big open room with most of the objects removed from the reachable space and only 3 cubes left. The task the agent faces is to put the black cube near the purple cube and the yellow cube near the purple cube, without putting the black and yellow cubes near each other. This simple logical puzzle requires the agent to figure out that there is a spatial ordering that satisfies these constraints: a line with the yellow cube, followed by the purple cube, followed by the black cube. Note that whilst this does not look like a complex problem at the predicate level, it is a very hard exploration problem due to the physical instantiation of XLand tasks – namely

$$\frac{\#\{\mathbf{s}:r_{\mathbf{g}}(\mathbf{s})=1\}}{\#\{\mathbf{s}:\mathbf{s}\in\mathcal{S}\}}\ll\frac{\#\{\phi(\mathbf{s}):r_{\mathbf{g}}(\mathbf{s})=1\}}{N_{\phi}}.$$

From the agent's behaviour and internal value we can hypothesise that the agent is initially confused. It starts by bringing the cubes together. Then, at around 10 seconds, we can see it visually inspecting the scene with the yellow and black cubes too close together, after which it tries to reshuffle them. This reshuffling process is repeated multiple times over the next few dozen seconds, until eventually, around 50 seconds into the episode, the agent stumbles upon a spatial arrangement of the cubes that satisfies the goal, which it again inspects visually. Whilst its internal state/value prediction suggests it is still not fully content, the agent keeps the objects in the rewarding state and stops shuffling the cubes. This within-episode experimentation could be a general heuristic *fallback* behaviour – when the agent lacks the ability to zero-shot generalise through understanding, it plays with the objects, experiments, and visually verifies whether it has solved the task – all as an emergent behaviour, a potential consequence of an open-ended learning process. Note that the agent does not perceive the reward; it has to infer it purely from its observations.

#### 6.4.3 Multi-agent

We now investigate some emergent multiplayer dynamics between agents playing specific probe games. We take 13 agent checkpoints through training of the final (5th) generation of our agent (checkpoint 1 is the earliest in training, through to checkpoint 13 which is the latest). For each of the probe scenarios described below, we play every single pair of checkpointed policies against each other.
This way we obtain $13^{2}=169$ matchups, and we evaluate each pair of players on 1000 different worlds (to marginalise over the physical instantiation), allowing us to study the development of strategies, social behaviours and learning dynamics of the agents in these games. Note that the agent was never trained against these checkpoints; the only co-players it ever experienced during training were from the previous generations. More details can be found in Section [A.9](#A1.SS9).

![Hide and Seek payoffs](/html/2107.12808/assets/figures/Hide_and_seek.png)

Figure 22: (Left) Payoffs for the game of Hide and Seek played by checkpoints of the agent through training from start (1) to end (13), marginalised across 1000 different worlds, from the perspective of the seeker (top) and hider (bottom) player roles (white is low, colour is high). (Right) The evolution of the Nash equilibrium distribution through training for each player role. One can note the back-and-forth dynamic of the hider and seeker improving over time. The agent never trained against these checkpoints.

##### Hide and Seek

We revisit the game of hide and seek in this new evaluation context. It is an asymmetric, imbalanced, fully competitive game. Figure [22](#S6.F22) shows the results of this experiment. With more training (later checkpoints), the agents keep improving in the execution of both the hider and seeker roles, showing a somewhat transitive strength progression without exhibiting the forgetting that can occur in multi-agent training (Vinyals et al., [2019](#bib.bib107); Czarnecki et al., [2020](#bib.bib20)), and the Nash equilibrium is mostly centred around the newest agents. We can also see a noticeable back-and-forth dynamic between the seeker and hider strategies: initially the Nash equilibrium for the seeker stays at checkpoint 1 (early in training) whilst the hider Nash equilibrium keeps moving to the newest checkpoint, suggesting that the agent is gradually improving its hiding strategy. Later, we see the opposite – the hider Nash equilibrium stops moving while the seeker Nash equilibrium keeps improving – with this switch happening multiple times. Note that this is observed without the agents ever training against each other; thus we hypothesise that these developments must come from agents acquiring new behaviours and strategies in other parts of the XLand task space. In particular, it is worth noting that during training, whilst the agent does not play against its checkpoints, and even though the game of hide and seek itself is not part of the training space, agents face games where their goal is to "see the other player" whilst the other player's goal is something else.
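A schematic of this pairwise checkpoint evaluation might look as follows; `load_checkpoint` and `play_episode` are hypothetical helpers standing in for the actual checkpoint loading and environment rollout code, and the exact return statistic is an assumption for illustration.

```python
import numpy as np

NUM_CHECKPOINTS = 13
NUM_WORLDS = 1000

def build_payoff_matrix(probe_task, load_checkpoint, play_episode):
    """Average return of checkpoint i (row player) against checkpoint j
    (column player) on a probe task, marginalised over sampled worlds."""
    payoff = np.zeros((NUM_CHECKPOINTS, NUM_CHECKPOINTS))
    for i in range(NUM_CHECKPOINTS):
        for j in range(NUM_CHECKPOINTS):
            pi_i = load_checkpoint(i)
            pi_j = load_checkpoint(j)
            returns = [play_episode(probe_task, world_seed=w,
                                    player=pi_i, co_player=pi_j)
                       for w in range(NUM_WORLDS)]
            payoff[i, j] = np.mean(returns)
    return payoff  # e.g. the seeker/hider payoffs visualised in Figure 22
```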
Consequently, even though they only train against a few instances of other agents, the space of behaviours they can encounter is enormous, because these agents are themselves conditioned on a variety of goals.

##### Conflict Avoidance

We hypothesise that as training progresses agents might develop the behaviour of avoiding conflict with other agents in situations where there is an alternative, non-conflicting option to be satisfied. We create a simple game where an agent can choose to place one of two spheres on a specific floor, while the other agent wants to put one of these spheres on a different floor. With both spheres equidistant from the target floor, the only reason to pick the non-conflicting sphere is to avoid conflict with the other agent. In Figure [23](#S6.F23) we can see that as the agent trains, it exhibits more and more conflict-avoiding behaviour, even though on average this does not necessarily lead to an increase in return on this particular task. However, empirically, when early non-conflict-avoiding checkpoints play with increasingly trained checkpoints, they achieve a decreasing amount of reward. Note that the agents are not training against each other, meaning that this development in behavioural response is purely an effect of the dynamic training distribution encountered during the open-ended learning process.

![Conflict avoidance](/html/2107.12808/assets/figures/Non_conflict.png)

Figure 23: (Left) Probabilities of each agent choosing to avoid conflict and the corresponding normalised scores, as a function of their total training time (1 meaning least trained, and 13 meaning the most trained; white is low probability, black is high probability). (Right) Fraction of conflict-avoiding behaviours (solid curve) and the corresponding normalised score (dashed curve). The agent becomes more conflict-avoiding over time whilst preserving its performance when matched with a copy of itself (red curve), while an earlier agent playing against later agents does not avoid conflict and its performance keeps decreasing (blue curve).

##### Chicken Game

In this experiment, we create an XLand version of a game-theoretic social dilemma called *Chicken*. In this setup, each agent can choose either to cooperate with its co-player or to try to dominate it. We observe two interesting trends with respect to the tendency to seek cooperative solutions in Figure [24](#S6.F24). First, if an agent is playing with a checkpoint from very early in training, it tends to dominate it more. On the other hand, when playing with a copy of itself (self-play), its tendency to collaborate increases over training time. One simple explanation of this phenomenon is that for cooperation to work, both sides need to be capable of it; consequently, it is perhaps harder to cooperate with a less capable agent. However, once facing an agent of exactly the same strength (self-play), the collaborative solution becomes preferred.

![Chicken game](/html/2107.12808/assets/figures/Chicken.png)

Figure 24: (Left) Probabilities of each agent choosing to collaborate or dominate, as a function of their total training time (1 meaning least trained, and 13 meaning the most trained).
(White is low probability, black is high probability.) (Right) Fraction of collaborative behaviours in a Chicken-like game through agent training. The agent becomes more collaborative over time when matched with a copy of itself (red curve), and dominates more when matched with earlier versions of itself (blue curve).

#### 6.4.4 Goal Interventions

During training our agents always received a single goal throughout an episode, the same goal at every timestep. We study whether the agent is able to adapt on the fly if this property is broken and the goal changes midway through a single episode.

![Goal intervention performance](/html/2107.12808/assets/figures/goal_swap.png)

Figure 25: Performance of the agent in zero-shot generalisation experiments where the goal of the agent is changed in the middle of an episode. Note that agents never perceived dynamically set goals during training.

We sample 500 tasks, all consisting of single-option, one-predicate games from the test set. We run the agent for an extended episode of 3/2 the length of a regular episode, where in the first 1/3 the agent is given one goal (for which we ignore its performance), and then we change the goal supplied to the agent to a different one. To simplify the setting, the co-players use the noop policy, simulating a single-player game. In Figure [25](#S6.F25) we compare the normalised score of the agent evaluated in this scenario with that of the agent playing the same game but whose internal state is reset when the goal changes, simulating starting the episode from scratch with a fixed goal. We also show the performance of an agent taking random actions for reference. We notice that the performance of the agent with the changed goal is almost exactly the same as with a fixed goal, showing robustness to goal changes.

#### 6.4.5 Failed Hand-authored Tasks

Whilst there are many tasks the agent participates in, there are also some hand-authored tasks in which it never achieves a single reward. Some examples are:

##### Gap tasks

Similar to the task in Figure [20](#S6.F20), in this task there is an unreachable object which the agent is tasked with being near. The object is unreachable due to a chasm between the agent and the object, with no escape route (once the agent falls into the chasm, it is stuck). This task requires the agent to build a ramp to navigate over and reach the object. It is worth noting that during training no such inescapable regions exist. Our agents fall into the chasm and, as a result, get trapped, suggesting that the agents assume they cannot get trapped.

##### Multiple ramp-building tasks

Whilst some tasks do show successful ramp building (Figure [20](#S6.F20)), some hand-authored tasks require multiple ramps to be built to navigate up multiple inaccessible floors. In these tasks the agent fails.

##### Following task

One hand-authored task is designed such that the co-player's goal is to be near the agent, whilst the agent's goal is to place the opponent on a specific floor.
This is very similar to the test tasks that are impossible even for a human; however, in this task the co-player policy acts in a way that follows the agent's player. The agent fails to lead the co-player to the target floor, lacking the theory of mind to manipulate the co-player's movements. Since the agent does not perceive the goal of the co-player, the only way to succeed in this task would be to experiment with the co-player's behaviour, which our agent does not do.

![Finetuning comparison](/html/2107.12808/assets/figures/finetuned.png)

Figure 26: Comparison of three agents from different training regimes on a range of hand-authored levels. Scratch: an agent trained from scratch for 100 million steps. Zero-shot: the agent trained using our methodology and evaluated on these held-out levels zero-shot. Fine-tuned: the same agent but trained for an additional 100 million steps on the level. 100 million steps is equivalent to 30 minutes of wall-clock time in our setup. This rapid finetuning improves the agent's score significantly compared to zero-shot, and in the majority of cases training from scratch does not achieve any reward.

### 6.5 Finetuning for Transfer

So far in this section we have demonstrated zero-shot generalisation to new tasks. The breadth of coverage of the agent's behaviour suggests that, whilst zero-shot performance can be achieved on many out-of-distribution test tasks, there is also the potential for very quick adaptation with finetuning. Using a simple training setup – without PBT, dynamic task generation, or any other hyperparameter tuning – we finetune the weights of the generally capable agent analysed previously for 100 million steps (approximately 30 minutes of training) on a number of tasks from the hand-authored set. The results are shown in Figure [26](#S6.F26). In all cases the finetuned agent achieves an increased reward compared to its zero-shot performance, with a drastic improvement of 340% on average. By construction, the maximum reward that could ever be achieved on an XLand task of 900 timesteps is $\mathbf{V}^{*}(\mathbf{x})\leq 900$. Using 900 as an upper bound on the optimal reward per task (a very loose one, since even an optimal policy needs some time to reach the objects of interest), learning from scratch scores at least 9% of the performance of the optimal policy, zero-shot performance is at the level of 39%, and the finetuned agent achieves 77%. With the same computational budget and 30 minutes of training, learning from scratch fails on the majority of these tasks. The task *Make Follow Easy* is described in the previous section as one of the tasks the agent fails to zero-shot generalise to; with 30 minutes of finetuning, the agent is able to achieve reward consistently on this task, successfully learning to coax the co-player to the target floor. These experiments show the potential of massively multi-task RL pre-training, as performed in this work, for subsequent transfer with finetuning to many different downstream target tasks.

### 6.6 Representation analysis

We now move our attention towards understanding how agents operate and the way they represent the simulated environment.
##### Kohonen Network

There is a multitude of methods, applicable in various situations, for analysing the internal representations of agents and understanding what knowledge is encoded in neuron activations (Goh et al., [2021](#bib.bib30)). We utilise Kohonen Networks (also known as Self-Organising Maps) (Kohonen, [1982](#bib.bib52)) to investigate the high-dimensional representations learned by our agents. This technique unifies three types of analysis, allowing us to:

* visualise the space of internal representations with respect to some labelling (often done with T-SNE (Van der Maaten and Hinton, [2008](#bib.bib103))),
* visualise the current state of the agent (*i.e.* a single $\mathbf{h}_t$) (previously done, for example, with a Neural Response Map (Jaderberg et al., [2019](#bib.bib44))),
* conduct simple concept decoding tests (often done with linear classifier probes (Alain and Bengio, [2017](#bib.bib1)) or single neuron activity analysis (Quiroga et al., [2005](#bib.bib75))).

A Kohonen Network is a collection of $K$ neurons $\mathfrak{h}_i \in \mathfrak{H} := \mathbb{R}^n$ trained to represent a dataset composed of points $x_j \in \mathbb{R}^n$ under some notion of distance (here we use standard Euclidean distance), using a pre-determined structure between the neurons that prescribes the geometry one is looking for in the dataset. In our work we use neurons arranged as a grid filling a 2-dimensional circle, giving each neuron a fixed position $\mathfrak{k}_i \in \mathfrak{K} := \mathbb{R}^2$.
To train the network, we iteratively minimise the following per-iteration loss using gradient descent:

$$\ell^{\mathfrak{H}}(\mathfrak{h}) := \sum_{i,j} \max\left\{0, \tfrac{\mathrm{d}_{\mathrm{max}}}{\mathrm{d}_{\mathrm{max}} - \|\mathfrak{k}_{i} - \mathfrak{k}_{\iota(x_{j})}\|}\right\} \|x_{j} - \mathfrak{h}_{i}\|^{2}, \qquad \iota(x) := \arg\min_{j} \|x - \mathfrak{h}_{j}\|^{2}.$$

Intuitively, for each point in the dataset, the closest Kohonen Neuron is selected (the *winning* neuron) and moved a bit closer to this data point, together with other neurons that are nearby in grid $\mathfrak{K}$ space, with their adaptation downscaled proportionally to how far away from the winning neuron they are. By fitting the Kohonen Network to the data in this manner, we are asking: *what 2d circle-like shape can fit into the n-dimensional dataset in such a way that its position corresponds to the density of the data?* More details can be found in Section [A.11](#A1.SS11).

![Refer to caption](/html/2107.12808/assets/figures/kohonen_maps.png)

Figure 27: Internal representation analysis of the agent. We use Kohonen Network representations of various properties for three different modules of the agent (LSTM, goal embedding, GOAT). Within a Kohonen Network, the bright yellow colour denotes states where the property is true, and blue where it is false. We shade out plots which represent combinations of properties and modules where the given property is not represented in a statistically significant manner by the output of the module (see Section [6.6](#S6.SS6)).

We gather 30k episodes of our trained agent across tasks sampled uniformly from the test set, and use the activations $x_j$ of the outputs of the LSTM, goal embedding module, and GOAT module to train three Kohonen Networks respectively.
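As a rough illustration of this fitting procedure, the sketch below fits a small Kohonen Network to a matrix of activations with plain gradient steps. The names, hyperparameters, and grid layout are assumptions made for illustration rather than the paper's implementation, and the neighbourhood weight uses a standard linearly decaying form consistent with the prose description (the exact weighting inside the loss above may differ).

```python
import numpy as np

def circle_grid(num_rings=10):
    """Positions k_i of Kohonen neurons on a grid filling a 2-D circle.
    The ring layout is an illustrative choice, not the paper's exact grid."""
    pts = [(0.0, 0.0)]
    for r in range(1, num_rings + 1):
        for t in range(6 * r):                      # more points on outer rings
            a = 2.0 * np.pi * t / (6 * r)
            pts.append((r / num_rings * np.cos(a), r / num_rings * np.sin(a)))
    return np.asarray(pts)                          # shape (K, 2)

def neighborhood_weight(grid_dist, d_max):
    # Standard linearly decaying neighbourhood, clipped at zero: full update for the
    # winning neuron, smaller updates for grid-neighbours, none beyond d_max.
    return np.maximum(0.0, (d_max - grid_dist) / d_max)

def fit_kohonen(X, grid, d_max=0.3, lr=0.1, iters=500, batch=256, seed=0):
    """Fit Kohonen neurons H (K, n) to activations X (N, n), with N >= K,
    by gradient steps on the neighbourhood-weighted quantisation loss."""
    rng = np.random.default_rng(seed)
    K = grid.shape[0]
    H = X[rng.choice(len(X), size=K, replace=False)].copy()   # init neurons on data
    grid_d = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=-1)  # (K, K)
    W_all = neighborhood_weight(grid_d, d_max)                             # (K, K)
    for _ in range(iters):
        xb = X[rng.choice(len(X), size=min(batch, len(X)), replace=False)]
        # winning neuron iota(x_j): nearest neuron in activation space
        d2 = (xb ** 2).sum(1)[:, None] - 2.0 * xb @ H.T + (H ** 2).sum(1)[None, :]
        win = d2.argmin(axis=1)                                            # (B,)
        W = W_all[:, win]                     # (K, B): weight of neuron i for point j
        # gradient of mean_j sum_i W_ij * ||x_j - h_i||^2 with respect to h_i
        grad = 2.0 * (H * W.mean(axis=1, keepdims=True) - (W @ xb) / len(xb))
        H -= lr * grad
    return H
```

Given the fitted neurons, each recorded agent state is assigned to its winning neuron, and the per-neuron property fractions described next are computed over these assignments.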
Next, we identified a collection of binary properties corresponding to the state $\mathbf{s}_j$ represented in these episodes, *e.g.* whether it is early in the episode, whether the agent is holding an object, whether the agent is in a rewarding state, *etc.* For each probe property $p$ we assign a colour to a specific Kohonen Neuron $\mathfrak{h}_i$ given by the fraction of data points containing the property relative to all the states that were mapped to this neuron:

$$c_{pi} := \frac{\#\{x_{j} : i = \iota(x_{j}) \wedge p(\mathbf{s}_{j})\}}{\#\{x_{j} : i = \iota(x_{j})\}}.$$

In Figure [27](#S6.F27) one can see qualitatively that different properties are clearly represented in different parts of the network. To quantify this, we compute the *Balanced Accuracy* (BAC) of a classifier which assigns a label to each state by a majority vote of labels inside each cluster (*i.e.* the set of points mapped onto a given neuron), formally:

$$\hat{p}_{\mathrm{T}_{\mathrm{module}}}(x) := c_{p\iota(x)} \geq \mathrm{T}_{\mathrm{module}}$$

for some threshold $\mathrm{T}_{\mathrm{module}} \in [0,1]$, and we compute

$$\mathrm{BAC}(\hat{p},p) := \max_{\mathrm{T}} \tfrac{1}{2}\left[\tfrac{\mathrm{TP}[\hat{p}_{\mathrm{T}},p]}{\mathrm{TP}[\hat{p}_{\mathrm{T}},p]+\mathrm{FN}[\hat{p}_{\mathrm{T}},p]}+\tfrac{\mathrm{TN}[\hat{p}_{\mathrm{T}},p]}{\mathrm{TN}[\hat{p}_{\mathrm{T}},p]+\mathrm{FP}[\hat{p}_{\mathrm{T}},p]}\right],$$

where $\mathrm{TP}, \mathrm{TN}, \mathrm{FP}, \mathrm{FN}$ are the fractions of true positives, true negatives, false positives and false negatives from a predictor $\hat{p}$ and the ground truth $p$. We decide that the information is present in a specific representation if and only if $\mathrm{BAC}(\hat{p},p) \geq 0.8$, meaning that if we were to randomly select a state where the property is true or false, we could with at least 80% probability correctly guess this label based purely on the colour of the corresponding Kohonen Neuron.

Using this quantitative measure of the information present in Figure [27](#S6.F27), we can first see that the notion of the flow of time, and whether the agent is holding an object, are clearly visible in the LSTM cell output, but are completely missing from the goal embedding module. They are, however, preserved at the output of the GOAT module, meaning that this information is probably useful for further policy/value predictions. We can also see that the agent clearly internally represents that it is in a rewarding state. This is significant given that the agent does not receive its reward, nor past rewards, as an input. The reward signal is used purely as part of RL training, so during inference the agent needs to be able to infer this information from its observations. Consequently, this implies that the agent is capable of using its RGB input to reason about the relations between objects, and their correspondence to the logical structure of the goal at hand. We further investigate whether this representation of a rewarding state is consistent with the agent’s internal atomic predicate prediction (denoted in Figure [27](#S6.F27) as *rewarding state and knows it*), where we further require all the atomic predicate predictions that are relevant to the rewarding state (*i.e.* the selected option) to be correct. We can see that this information is also very well represented. On the other hand, if we ask whether the agent represents the atomic predicate states of all relations involved in the goal (*i.e.* the atomic predicate states contributing to other options, which the agent might not be pursuing right now), we see this information is not present in any of the modules we investigated. This suggests that the agent has a very good, but focused, understanding of the state of the world, and attends mainly to the aspects of state that are relevant to the option it is currently following.
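To make the probe concrete, here is a minimal sketch of the per-module property probe, assuming the winning-neuron index for every recorded state and a boolean label per state are already available; the function and variable names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def neuron_property_fractions(winners, prop, K):
    """c_pi: fraction of states mapped to Kohonen neuron i for which property p holds.
    winners: (N,) winning-neuron index iota(x_j) per state; prop: (N,) boolean labels."""
    counts = np.bincount(winners, minlength=K).astype(float)
    hits = np.bincount(winners, weights=prop.astype(float), minlength=K)
    return np.where(counts > 0, hits / np.maximum(counts, 1.0), 0.0)

def balanced_accuracy(pred, prop):
    """Mean of true-positive rate and true-negative rate (assumes both labels occur)."""
    tpr = np.mean(pred[prop])          # TP / (TP + FN)
    tnr = np.mean(~pred[~prop])        # TN / (TN + FP)
    return 0.5 * (tpr + tnr)

def module_property_bac(winners, prop, K, thresholds=np.linspace(0.0, 1.0, 101)):
    """BAC of the per-module probe p_hat_T(x) := c_{p, iota(x)} >= T, maximised over T.
    A property is deemed present in the module's representation if this reaches 0.8."""
    c = neuron_property_fractions(winners, prop, K)
    return max(balanced_accuracy(c[winners] >= T, prop) for T in thresholds)
```

The single-neuron analysis in the next subsection follows the same pattern, additionally maximising over the choice of neuron.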
We can ask an analogous question of whether the agent is aware of how many atomic predicate states it needs to change before it can obtain a reward. The distinction between having to flip one atomic predicate or more is clearly encoded in the goal embedding module – with a small island of activations in the upper right corner corresponding to multiple missing atomic predicates, while the big smooth region around it corresponds to needing to flip exactly one. While this information is clearly preserved in the GOAT module output, we can see that the two cases are mapped onto similar regions, suggesting that as the information is processed through the network and reaches the point where only the policy/value needs to be produced, this distinction is potentially less relevant. Finally, details regarding the exact game that an agent is playing (*e.g.* the number of options involved) are clearly represented in its goal embedding module, but are then not propagated to the GOAT module, suggesting that whatever decision needs to be made that affects the policy/value can be done solely at the goal embedding level, and does not need to be integrated with the LSTM output.

![Refer to caption](/html/2107.12808/assets/figures/kohonen_neurons.png)

Figure 28: Internal representation analysis of the agent. The Kohonen Neurons encode four well represented concepts from Figure [27](#S6.F27). The kernel density estimation plots represent the density of the activity of the neuron when the concept is true (in colour) or false (in gray).

##### Kohonen Neurons

An associated question one could ask is whether there exists a single Kohonen Neuron coding for a specific property. Note that a Kohonen Neuron does not correspond to a single neuron in the agent's neural network, but rather a collection of them, found using unsupervised training (and is thus more related to general notions of distributed sparse representations than to so-called grandmother cells (Connor, [2005](#bib.bib18))). This can be seen more as a distributed concept, though not simply a linear classifier probe (Alain and Bengio, [2017](#bib.bib1)), as the Kohonen Neuron is found without access to the corresponding labels.
Formally, for each Kohonen Neuron $i$ we consider

$$\bar{p}_{i\mathrm{T}_{\mathrm{neuron}}}(x) := c_{pi} \geq \mathrm{T}_{\mathrm{neuron}},$$

and

$$\mathrm{BAC}(\bar{p},p) := \max_{i,\mathrm{T}} \tfrac{1}{2}\left[\tfrac{\mathrm{TP}[\bar{p}_{i\mathrm{T}},p]}{\mathrm{TP}[\bar{p}_{i\mathrm{T}},p]+\mathrm{FN}[\bar{p}_{i\mathrm{T}},p]}+\tfrac{\mathrm{TN}[\bar{p}_{i\mathrm{T}},p]}{\mathrm{TN}[\bar{p}_{i\mathrm{T}},p]+\mathrm{FP}[\bar{p}_{i\mathrm{T}},p]}\right].$$

We note that for being early in the episode, having a high baseline, being in a rewarding state, and for multiple missing atomic predicates, we can identify corresponding Kohonen Neurons achieving a $\mathrm{BAC}$ of over 75% (Figure [28](#S6.F28)).

##### Value consistency

In Section [5.1](#S5.SS1) we discussed *value consistency*, the fact that the optimal policy value of a game composed of multiple alternatives is always lower bounded by the maximum value of the optimal policy for each separate option. Whilst the agent is encouraged to preserve a similar property over its current policy, it is not fully enforced.
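Before turning to the measurements, here is a small sketch of how per-episode value consistency could be estimated from the agent's per-option value estimates; the array layout (column 0 for the full goal, remaining columns for the options) is an assumption made purely for illustration.

```python
import numpy as np

def value_consistency_fraction(v_hat):
    """v_hat: (T, 1 + num_options) per-timestep value estimates, where column 0 is the
    value of the full goal and the remaining columns are the per-option values (this
    layout is an assumption for the sketch, not the paper's data format).
    Returns the fraction of timesteps with v[0] >= max_i v[i], i.e. the empirical
    probability that the full-goal value upper bounds every per-option value."""
    full, per_option = v_hat[:, 0], v_hat[:, 1:]
    if per_option.shape[1] < 2:
        return 1.0  # one-option goals are value consistent by definition
    return float(np.mean(full >= per_option.max(axis=1)))
```

Averaging this fraction over episodes with two or three options gives the kind of per-episode consistency measure analysed next.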
We investigate how consistent the trained agent is in this respect by looking at its internal values for each option and computing

$$\mathrm{Pr}\left[\widehat{\mathbf{v}}_{t}^{[0]} \geq \max_{i>0} \widehat{\mathbf{v}}_{t}^{[i]}\right].$$

In Figure [29](#S6.F29) we show the density estimation, over episodes, of the per-episode probability of value consistency. In expectation, our agent is value consistent around 90% of the time (for goals with more than one option, since by definition an agent is always value consistent for one-option goals). Value consistency is clearly shown in a previously discussed example, Figure [19](#S6.F19), with the value of the full game upper bounding the values of the individual options, even as the individual option values fluctuate.

![Refer to caption](/html/2107.12808/assets/figures/value_consistency.png)

Figure 29: The kernel density estimation of the fraction of frames inside a single episode where the agent’s internal value estimation of the whole goal is lower bounded by the maximum value over options (*value consistency*, Theorem [5.1](#S5.Ex40)). We only consider goals with two and three options, as this property is trivially true for one-option goals.

7 Related Work
---------------

This work builds heavily upon the ideas of many related works. We now review some of these in the areas of multi-agent learning and progressive learning, iterative improvement and percentiles, procedural environment generation, curriculum over tasks, curriculum over goals, and world-agent co-evolution.

##### Multi-agent and progressive learning.

Our environment is multi-agent, and as such we face the challenges of multi-agent learning systems, characterised previously as non-stationarity, exploration, and interaction modelling (Lowe et al., [2017](#bib.bib57); Mahajan et al., [2019](#bib.bib58); Bowling, [2000](#bib.bib12)). Like others, we also see multi-agent reinforcement learning as a potential solution to other challenges, such as the design of autocurricula (Leibo et al., [2019](#bib.bib55)) or even end-to-end learning of pixel-perception based agents (Jaderberg et al., [2019](#bib.bib44)). The notion of generations of agents, forming a growing set (or *league* in Vinyals et al. ([2019](#bib.bib107))) of agents, lies at the core of many multi-agent learning algorithms (Lanctot et al., [2017](#bib.bib53); Balduzzi et al., [2019](#bib.bib7)).
The difference in this work is the utilisation of the generational split to encapsulate self-contained units of reinforcement learning, such that the objective can change between generations, as well as the utilisation of a massive space of games being solved in parallel. This progressive growing of the set of agents on multi-task spaces is also related to progressive learning (Furlanello et al., [2018](#bib.bib28); Schwarz et al., [2018](#bib.bib87); Rusu et al., [2016](#bib.bib78)), as well as multi-task learning with knowledge sharing (Teh et al., [2017](#bib.bib100)). Sessa et al. ([2020](#bib.bib88)) proposes a mathematical framework of *contextual games*, which could be used to view XLand goal-conditioned agents. They show an effective sampling strategy for scheduling games under an assumption of smoothness of the mapping from contexts to optimal policies. From a formal standpoint the goal space of XLand forms a Boolean algebra and thus can benefit from exploitation of this structure (Nangue Tasse et al., [2020](#bib.bib65); Todorov, [2009](#bib.bib101); Van Niekerk et al., [2019](#bib.bib105)). Currently, we exploit these properties in our GOAT module, as well as in how we navigate game space to create games with specific properties. Vezhnevets et al. ([2020](#bib.bib106)) studies architectures and auxiliary losses (Jaderberg et al., [2017b](#bib.bib43)) in a multi-agent setting with hindsight knowledge of agents’ goals, which could be applied to our setting. Leibo et al. ([2017](#bib.bib54)) studies sequential social dilemmas, in particular trying to identify well-known social dilemma classes (Robinson and Goforth, [2005](#bib.bib77)) in empirical payoffs emerging from RL simulations, which our multi-agent analysis draws upon. Automated identification of varied social dilemmas in our setup is an interesting open research question.

##### Iterative improvement and percentiles.

Iterative notions of improvement have been used, particularly in multi-agent systems, either explicitly by targeting the goal with respect to known opponents (McMahan et al., [2003](#bib.bib61); Balduzzi et al., [2019](#bib.bib7)), implicitly by using internal ranking between agents (Jaderberg et al., [2019](#bib.bib44)), or simply because of the reliance on self-play (Silver et al., [2016](#bib.bib91)). In this work we use similar ideas but applied to worlds and games in addition to other agents (co-players), and propose a way to deal with the non-comparable reward scales of the resulting tasks. When thinking about general capability and catastrophic failure of policies, the field of robust and risk-sensitive reinforcement learning (Borkar and Jain, [2010](#bib.bib11); Prashanth and Ghavamzadeh, [2013](#bib.bib74); Tamar et al., [2012](#bib.bib99)) has analysed variability in obtained rewards to find safer solutions. In particular, percentile-based measures have been utilised (Filar et al., [1995](#bib.bib26); Delage and Mannor, [2010](#bib.bib23)) to ensure or target specific guarantees on the probability of obtaining a reward in a given task. In this work we use similar ideas on the level of the distribution over tasks, rather than on the level of individual policy outcomes. The use of curves of normalised score with respect to percentiles to visualise and characterise performance is inspired by ROC curves (Hanley and McNeil, [1982](#bib.bib36)).

##### Procedural environment generation.

Many previous works have used procedural generation and evolution to create interesting environments for players (both agents and humans).
Togelius and Schmidhuber ([2008](#bib.bib102)) propose an evolving system to generate interesting rules for a game by selecting games in which random agents score poorly and trained agents score highly. Volz et al. ([2018](#bib.bib108)) use a Generative Adversarial Network (GAN, Goodfellow et al. ([2014](#bib.bib31))) to generate Super Mario Bros levels. They further search the latent space of the GAN using evolutionary methods to discover levels that are difficult but achievable for a previously trained agent. Justesen et al. ([2018](#bib.bib46)) train an agent in a procedurally generated environment and update a difficulty parameter based on the agent’s recent success rate – we make use of similar measures to influence task generation in our work. Grbic et al. ([2020](#bib.bib33)) evolve Minecraft levels, both via interactive and automated evolution. CPPN2GAN (Schrum et al., [2020](#bib.bib86)) generates large diverse game levels by combining GANs, content-producing Compositional Pattern Producing Networks (CPPNs; Stanley, [2007a](#bib.bib95)) and the NEAT evolutionary algorithm (Stanley and Miikkulainen, [2002](#bib.bib97)). The GAN is first trained on a dataset of existing levels to reproduce individual rooms. A CPPN is then evolved to transform the grid’s coordinate locations into a latent representation that can be input to the GAN. The CPPN is evolved to maximise metrics such as the length of the shortest path to solve a level. In PCGRL (Khalifa et al., [2020](#bib.bib49)), a deep RL agent is made to edit worlds in order to maximise a bespoke reward function, such as generating long paths for a maze.

##### Curriculum over tasks.

Both our procedure for world-agent co-evolution (Section [A.1.1](#A1.SS1.SSS1)) and dynamic task generation are examples of automated curriculum learning (ACL, Portelas et al. ([2020b](#bib.bib73))). In ACL, the training distribution of the agent is automatically adapted throughout training. A number of methods attempt to use learning progress (Kaplan and Oudeyer, [2007](#bib.bib48); Schmidhuber, [2010](#bib.bib81)) on a task as a way to decide whether the task should be trained on or not (Graves et al., [2017](#bib.bib32)). In the context of reinforcement learning, this has been used to select tasks or task parameters (Matiisen et al., [2020](#bib.bib60); Portelas et al., [2020a](#bib.bib72); Kanitscheider et al., [2021](#bib.bib47)). OpenAI et al. ([2019](#bib.bib68)) automatically adapt the parameters of their environment for solving a Rubik’s cube with a robot hand. They start with a narrow domain distribution and continuously expand this distribution when the agent is seen to have good performance at its boundaries. Prioritised Experience Replay (Schaul et al., [2016](#bib.bib80)) changes the distribution with which experience is replayed by prioritising transitions with high Temporal Difference (TD) error. Similarly, Jiang et al. ([2020](#bib.bib45)) propose Prioritised Level Replay, which samples new levels to play on based on the observed TD error in recent experience on those levels. In CARML, Jabri et al. ([2019](#bib.bib41)) adapt the task distribution to form a curriculum for meta-RL by maximising the mutual information between a latent task variable and the corresponding trajectories. In PowerPlay, Schmidhuber ([2013](#bib.bib82)) proposes a framework to continuously seek the simplest unsolved challenge to train on.
The adaptation of curricula in many of these works uses hand-crafted heuristics, as we do with dynamic task generation; however, in our case the parameters of the heuristic itself are adapted with PBT.

##### Curriculum over goals.

A large body of work is concerned with the training of goal-conditioned agents (Schaul et al., [2015](#bib.bib79)) in a single environment. In these past works, the goal usually consists of the position of the agent or a target observation to reach; however, some previous work uses text goals (Colas et al., [2020](#bib.bib17)) for the agent, similarly to this work. When the goal is a target observation, most methods acquire new goals by sampling observations previously generated in the environment: Nair et al. ([2018](#bib.bib64)) generate visual goals by training a Variational Auto-Encoder (Kingma and Welling, [2014](#bib.bib50)) over the generated experience. Hindsight Experience Replay (HER, Andrychowicz et al. ([2017](#bib.bib2))) trains a goal-conditioned agent by replaying trajectories with the agent conditioned on the goal that was achieved in the trajectory. Fang et al. ([2019](#bib.bib25)) add a curriculum to Hindsight Experience Replay by dynamically changing the selection of trajectories for replay. Pong et al. ([2020](#bib.bib71)) propose a method to increase the importance of rarely sampled observations as goals. Warde-Farley et al. ([2019](#bib.bib112)) propose a variety of goal achievement reward functions which measure how similar a state is to the goal state. Racanière et al. ([2020](#bib.bib76)) perform a curriculum over environment goals in randomly initialised 2D and 3D worlds. A setter generates goals for a solver agent. The setter minimises a few different losses which aim to yield a wide variety of tasks of various difficulties for the current agent policy. CURIOUS (Colas et al., [2019](#bib.bib16)) sets a curriculum over environment goals by prioritising goal spaces which have shown recent learning progress and then sampling goals uniformly over those goal spaces. Florensa et al. ([2018](#bib.bib27)) propose an adversarial goal generation procedure in which a goal-GAN generates goals for locomotion tasks that the agent must solve. The objective of the goal setter is similar to that used in our world-agent co-evolution procedure: guarantee that the success probability is within a preset range. Zhang et al. ([2020](#bib.bib114)) choose goals where there is high epistemic uncertainty on the Q-function. AMIGo (Campero et al., [2021](#bib.bib14)) also generates a curriculum of goals for the agent but does so by looking at the current number of steps needed by the agent to reach the goal. In Asymmetric self-play (Sukhbaatar et al., [2018](#bib.bib98); OpenAI et al., [2021](#bib.bib69)), two agents interact in turn in the environment: Alice and Bob. Alice first plays in the environment and generates a trajectory. From there, Bob can either be tasked with returning the player to its original location, or, in a new episode, reaching the same state that Alice achieved. The self reward-play modification (Section [5.3](#S5.SS3)) can be seen as a sequential version of this within a single episode, with the same agent playing both Alice and Bob.

##### World-agent co-evolution.
Our procedure for world-agent co-evolution (Section [A.1.1](#A1.SS1.SSS1)) shares similarities with POET (Wang et al., [2019](#bib.bib109), [2020](#bib.bib110)) and PAIRED (Dennis et al., [2020](#bib.bib24)). In all cases, the procedure generates a dynamic high-dimensional world distribution for agents. In POET, a population of environment-agent pairs is evolved through time. Agents are continuously trained on their paired environment. Occasionally, agents are transferred to other environments in the population. In PAIRED, two agents are coevolved: a protagonist agent and an antagonist agent. The protagonist agent attempts to solve tasks generated by the antagonist agent. The antagonist also plays in the generated environments. The difference between the average score of the protagonist and the best score of the antagonist across multiple trials is defined as the regret. The protagonist is trained to minimise this regret while the antagonist is trained to maximise it. Compared with both these methods, our proposed procedure is simpler: it only requires a single agent to be trained to solve tasks. We filter levels only based on the agent’s estimated probability of success. Finally, the use of the world-agent co-evolution process to create the base distribution for training and evaluation for the remainder of our learning process is an example of AI-generating algorithms (Clune, [2019](#bib.bib15)).

8 Conclusions
--------------

In this work, we introduced an open-ended 3D simulated environment space for training and evaluating artificial agents. We showed that this environment space, XLand, spans a vast, diverse, and smooth task space, being composed of procedurally generated worlds and multiplayer games. We looked to create agents that are generally capable in this environment space – agents which do not catastrophically fail, are competent on many tasks, and exhibit broad ability rather than narrow expertise. An iteratively revised metric of normalised score percentiles on an evaluation set of tasks was used to characterise general capability, and a learning process was created to drive iterative improvement. This learning process is composed of agents training with deep RL, on training task distributions that are dynamically generated in response to the agent’s behaviour. Populations of agents are trained sequentially, with each generation of agents distilling from the best agent in the previous generation, iteratively improving the frontier of normalised score percentiles, whilst redefining the metric itself – an open-ended learning process. Combining this environment space with such a learning process resulted in agents that appear to have broad ability across our held-out evaluation space, catastrophically failing on only a small percentage of tasks, ones that are humanly impossible. We qualitatively and quantitatively characterised some of the emergent behaviours of this agent and saw general behavioural heuristics such as experimentation and success recognition, and the tendency to cooperate more with other competent agents, behaviours which appear to generalise to many out-of-distribution probe tasks. These behaviours are driven by rich internal representations that we analysed, showing clear representations of the structure and state of the goals the agent is tasked to follow.
These results hint at the ability to train agents, without human demonstrations, which exhibit general capabilities across vast task spaces. Beyond zero-shot generalisation, the ability to quickly finetune these pretrained agents on complex out-of-distribution tasks was demonstrated clearly. We hope the presented methods and results pave the way for future work on creating ever more adaptive agents that are able to transfer to ever more complex tasks.

Author Contributions
--------------------

The following lists the main contributions of the authors to the work presented.

Adam Stooke: Learning process development and research investigations.
Anuj Mahajan: Agent analysis.
Catarina Barros: Environment development and visuals.
Charlie Deck: Environment development.
Jakob Bauer: Infrastructure development, learning process development, research investigations, and technical management.
Jakub Sygnowski: Infrastructure development, agent analysis, and research investigations.
Maja Trebacz: Research investigations.
Max Jaderberg: Learning process development, research investigations, manuscript, visuals, XLand concept, and team management.
Michael Mathieu: Learning process development and research investigations.
Nat McAleese: Infrastructure development and research investigations.
Nathalie Bradley-Schmieg: Program management.
Nathaniel Wong: Environment development and visuals.
Nicolas Porcel: Environment development.
Roberta Raileanu: Research investigations.
Steph Hughes-Fitt: Program management.
Valentin Dalibard: Learning process development, infrastructure development, research investigations, agent analysis, and manuscript.
Wojciech Marian Czarnecki: Learning process development, research investigations, agent analysis, manuscript, visuals, and XLand concept.

All authors shaped the final manuscript.

Acknowledgements
----------------

We would like to thank Simon Osindero, Guy Lever, and Oriol Vinyals for reviewing the manuscript, Satinder Singh and Koray Kavukcuoglu for support of the project, and Marcus Wainwright and Tom Hudson for additional environment art and support. We also thank the wider DeepMind research, engineering, and environment teams for the technical and intellectual infrastructure upon which this work is built.
16c3fbfc-d579-4922-9447-f6d6076a85b6
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
How can I work on helping AI alignment researchers be more effective, e.g. as a coach? If you want to get good at coaching and otherwise helping researchers be more effective and productive, it helps to learn about topics like productivity techniques, practical rationality techniques like [those taught by CFAR](https://www.lesswrong.com/s/KAv8z6oJCTxjR8vdR), and generally how minds work and how to interact with them helpfully (one example here is the [Multiagent Models of Mind](https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip) sequence of posts). To get experience, test your fit, and enter the field, one thing you can do is practice with friends and get their feedback (or practice on yourself), doing your best to genuinely help. You could offer some coaching services for free and apply for a small grant to do this. This would help you get contacts and practice, and bring yourself to people’s attention. When you become more skilled and better positioned, you could then switch to paid coaching or try to get additional funding so you can keep offering your services for free.
d6e8cbf5-d1d5-4887-8dcf-0678db884b12
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Announcing Timaeus ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nN7bHuHZYaWv9RDJL/r2hfjkg0aivtxcmt1m3t)[Timaeus](https://timaeus.co/) is a new AI safety research organization dedicated to making fundamental breakthroughs in technical AI alignment using deep ideas from mathematics and the sciences. Currently, we are working on [singular learning theory](https://www.lesswrong.com/s/czrXjvCLsqGepybHC) and [developmental interpretability](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability). Over time we expect to work on a broader research agenda, and to create [understanding-based evals](https://www.lesswrong.com/posts/uqAdqrvxqGqeBHjTP/towards-understanding-based-safety-evaluations#:~:text=Rather%20than%20evaluating%20a%20final,and%20why%20they%20got%20it.) informed by our research.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nN7bHuHZYaWv9RDJL/i4vhfj18dtvrxtijdc65)Let sleeping gods (not) lie.Activities ========== Our primary focus is research and research-training. For now, we're a remote-first organization. We collaborate primarily through online [seminars](https://metauni.org/slt/) and the [DevInterp Discord](https://discord.gg/pCf4UynKsc), with regular in-person meetings at workshops and conferences (see below). We're also investing time in academic outreach to increase the general capacity for work in technical AI alignment. Research -------- We believe singular learning theory, a mathematical subject founded by [Sumio Watanabe](https://watanabe-www.math.dis.titech.ac.jp/users/swatanab/), will lead to a better fundamental understanding of large-scale learning machines and the computational structures that they learn to represent. It has already given us concepts like the [learning coefficient](https://arxiv.org/abs/2308.12108) and [insights into phase transitions](https://arxiv.org/abs/2310.06301) in Bayesian learning. We expect significant advances in the theory to be possible, and that these advances can inform new tools for alignment. [Developmental interpretability](https://www.lesswrong.com/posts/TjaeCWvLZtEDAS5Ex/towards-developmental-interpretability) is an approach to understanding the emergence of structure in neural networks, which is informed by singular learning theory but also draws on mechanistic interpretability and ideas from statistical physics and [developmental biology](https://journals.biologists.com/dev/article/150/11/dev201280/312613/A-dynamical-systems-treatment-of-transcriptomic). The key idea is that *phase transitions organize learning* and that *detecting, locating, and understanding these transitions* could pave a road to evaluation tools that *prevent* the development of dangerous capabilities, values, and behaviors. We're engaged in a [research sprint](https://manifund.org/projects/scoping-developmental-interpretability-xg55b33wsfc) to test the assumptions of this approach.  We see these as two particularly promising research directions, and they are our focus for now. Like any ambitious research, they are not guaranteed to succeed, but there's plenty more water in the well. Broadly speaking, the research agenda of Timaeus is oriented towards solving problems in technical AI alignment using deep ideas from across many areas of mathematics and the sciences, with a "full stack" approach that integrates work from pure mathematics through to machine learning experiments. The outputs we have contributed to so far: * [Lau et al. 
(2023)](https://arxiv.org/abs/2308.12108) "Quantifying Degeneracy in Singular Models via the Learning Coefficient" * [Chen et al. (2023)](https://arxiv.org/abs/2310.06301) "Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition" * [Hoogland & Van Wingerden (2023)](https://www.lesswrong.com/posts/6g8cAftfQufLmFDYT/you-re-measuring-model-complexity-wrong), a distillation of Lau et al * [The DevInterp Repo & Python Package](https://github.com/timaeus-research/devinterp) * [The 2023 Primer on Singular Learning Theory and Alignment](https://www.youtube.com/watch?v=bFVGc2UKARc) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nN7bHuHZYaWv9RDJL/d45vlbzseicber1xdybu)Academic Outreach ----------------- AI safety remains bottlenecked on senior researchers and mentorship capacity. The young people already in the field will grow into these roles. However, given the scale and urgency of the problem, we think it is important to open inroads to academia and encourage established scientists to spend their time on AI safety.  Singular learning theory and developmental interpretability can serve as a natural bridge between the emerging discipline of AI alignment and existing disciplines of mathematics and science, including physics and biology. We plan to spend part of our time onboarding scientists into alignment via concrete projects in these areas. As part of these efforts, we're aiming to pilot a course at the University of Melbourne in late 2024 on "Mathematical Methods in AI Safety", similar in spirit to existing graduate courses on Mathematical Methods in Physics.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nN7bHuHZYaWv9RDJL/jc7ziq8qobgcfh8rc1bk)Conferences ----------- We're organizing conferences, retreats, hackathons, etc. focusing on singular learning theory and developmental interpretability. These have included and will include: * [Conference on singular learning theory and AI alignment](https://www.lesswrong.com/posts/HtxLbGvD7htCybLmZ/singularities-against-the-singularity-announcing-workshop-on) (Berkeley, June-July 2023) * [Retreat on singular learning theory](https://devinterp.com/events/2023-q3-amsterdam-retreat) (Amsterdam, September 2023) * [Hackathon on developmental interpretability](https://events.humanitix.com/hackathon-developmental-interpretability) (Melbourne, October 2023) * [Conference on developmental interpretability](https://www.lesswrong.com/posts/QpFiEbqMdhaLBPb7X/apply-now-for-the-devinterp-2023-fall-summit) (Oxford, November 2023) * [AI safety demo days](https://devinterp.com/participate/research-sprints) (Virtual, November 2023) open to independent and junior researchers to present their work. * We've been amassing a list of [project ideas](https://devinterp.com/projects) to inspire researchers[.](https://www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability#Overview_of_Sequence) Team ==== Core Team --------- * [**Daniel Murfet**](http://therisingsea.org/) (Research Director) is a mathematician at the University of Melbourne, an expert in singular learning theory, algebraic geometry, and mathematical logic. * [**Jesse Hoogland**](https://jessehoogland.com)(Executive Director) has a MSc. in theoretical physics from the University of Amsterdam and previously cofounded a health-tech startup before going all in on AI safety. 
He participated in MATS 3.0 & 3.1 under the supervision of Evan Hubinger and was a research assistant in David Krueger's lab. * [**Stan van Wingerden**](https://www.lesswrong.com/users/stan-van-wingerden?mention=user) (Operations & Finances) has a MSc. in theoretical physics from the University of Amsterdam and was previously the CTO of an algorithmic trading firm. * [**Alexander Gietelink Oldenziel**](https://www.lesswrong.com/users/alexander-gietelink-oldenziel?mention=user)(Strategy & Outreach) is a DPhil student at University College London with a background in mathematics. Research Assistants ------------------- We just concluded a round of hiring and are excited to bring on board several very talented Research Assistants (RAs), starting with * Sai Niranjan * [**George Wang**](https://www.georgeyw.com/) Friends and Collaborators ------------------------- Since beginning to plan Timaeus in June 2023 we have been engaging with a range of people, both within the field of AI alignment and in academia. Here are some of the people we are actively collaborating with: * [**Susan Wei**](https://www.suswei.com/) (Statistician, University of Melbourne) * [**Calin Lazaroiu**](https://scholar.google.com/citations?user=O3CEqUoAAAAJ&hl=en) (Mathematical Physicist, UNED-Madrid and IFIN-HH Bucharest) * [**Simon Pepin Lehaulleur**](https://simon-pepin.github.io/) (Mathematician, Postdoc at University of Amsterdam) * [**Tom Burns**](https://tfburns.com/) (Neuroscientist, Postdoc at ICERM, Brown University) * [**Edmund Lau**](https://edmundlth.github.io/) (PhD student, University of Melbourne) * Zhongtian Chen (PhD student, University of Melbourne) * Ben Gerraty (PhD student, University of Melbourne) * [**Matthew Farrugia-Roberts**](https://far.in.net/) (Research Assistant, David Krueger's AI Safety Lab, University of Cambridge) * [**Liam Carroll**](https://lemmykc.github.io/) (Independent AI safety researcher) * [**Rohan Hitchcock**](https://rohanhitchcock.com/) (PhD student, University of Melbourne) * [**Will Troiani**](https://williamtroiani.github.io/) (PhD student, University of Melbourne and University Sorbonne Paris Nord) * [**Jake Mendel**](https://www.linkedin.com/in/jake-m-b05289126/) (Independent AI safety researcher) * [**Zach Furman**](https://www.linkedin.com/in/zach-furman-4936a0a5) (Independent AI safety researcher) Inclusion on this list does not imply endorsement of Timaeus' views. Advisors -------- We're advised by [**Evan Hubinger**](https://www.alignmentforum.org/users/evhub) and [**David ("Davidad") Dalrymple**](https://www.alignmentforum.org/users/davidad).  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nN7bHuHZYaWv9RDJL/yp7rnqexov7eq7xlri2h)(DALL-E 3 still has a hard time with icosahedra.)FAQ === Where can I learn more, and contact you? ---------------------------------------- Learn more on the [Timaeus](https://timaeus.co/) webpage. You can email [Jesse Hoogland](mailto:jesse@timaeus.co). What about capabilities risk? ----------------------------- There is a risk that fundamental progress in either singular learning theory or developmental interpretability could contribute to further acceleration in AI capabilities in the medium term. We take this seriously and are seeking advice from other alignment researchers and organizations. By the end of our current research sprint we will have in place institutional forms to help us navigate this risk. 
Likewise, there is a risk that outreach which aims to involve more scientists in AI alignment work will also accelerate progress in AI capabilities. However, those of us in academia can already see that as the risks become more visible, scientists are starting to think about these problems on their own. So the question is not *whether* a broad range of scientists will become interested in alignment but *when* they will start to contribute and *what they work on.* It is part of Timaeus' mission to help scientists to responsibly contribute to technical AI alignment, while minimizing these risks. Are phase transitions really the key? ------------------------------------- The strongest critique of developmental interpretability we know is the following: while it is established that phase transitions exist in neural network training, it is not yet clear *how common* they are, and whether they make a *good target* for alignment. We think developmental interpretability is a good investment in a world where many of the important structures (e.g., circuits) in neural networks form in phase transitions. Figuring out whether we live in such a world is one of our top priorities. It's not trivial because even if transitions exist they may not necessarily be visible to naive probes. Our approach is to systematically advance the fundamental science of finding and classifying transitions, starting with smaller systems where transitions can be definitively shown to exist. How are you funded? ------------------- We're funded through a [$142k Manifund grant](https://manifund.org/projects/scoping-developmental-interpretability-xg55b33wsfc) led primarily by [Evan Hubinger](https://manifund.org//projects/scoping-developmental-interpretability-xg55b33wsfc?tab=comments#3fd4492b-9fad-0356-3a72-b9c9f56e462a), [Ryan Kidd](https://manifund.org//projects/scoping-developmental-interpretability-xg55b33wsfc?tab=comments#08b1f696-1dc1-60d4-5b74-475276d5e16c), [Rachel Weinberg](https://manifund.org//projects/scoping-developmental-interpretability-xg55b33wsfc?tab=comments#0498fe5e-fbb1-6237-d0f5-f8aba9cd968e), and [Marcus Abramovitch](https://manifund.org//projects/scoping-developmental-interpretability-xg55b33wsfc?tab=comments#55989ce8-aedf-c739-08af-a7166a4e27cc). We are fiscally sponsored by [Ashgro](https://ashgro.org/). We could put an additional $500k to work. (If you're interested in contributing, reach out to [Jesse Hoogland](mailto:jesse@timaeus.co).) If our research pans out as we anticipate, we'll be aiming to raise $1-5m in Q2 of 2024. "Timaeus"? How do I even pronounce that? ---------------------------------------- [Pronounce it however you want.](https://forvo.com/word/timaeus/#en) Timaeus is the eponymous character in the [dialogue](https://en.wikipedia.org/wiki/Timaeus_(dialogue)) where Plato introduces his theory of forms. The dialogue posits a correspondence between the elements that make up the world and the Platonic solids. That's wrong, but it contains the germ of the idea of the unreasonable effectiveness of mathematics in understanding the natural world.  We read the Timaeus dialogue with a spirit of hope, in the capacity of the human intellect to understand and solve wicked problems. The narrow gate to human flourishing is preceded by a narrow path.  We'll see you on that path. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/nN7bHuHZYaWv9RDJL/m0x7xkqz1dtcuse4kykk)
ae810357-5cd3-4017-a0a6-0f353b102057
trentmkelly/LessWrong-43k
LessWrong
You Need More Money Part 1 of the Inefficient Markets Sequence. [Epistemic Disclaimer: I am not rich yet. I feel like I'm just barely starting to understand this stuff and perhaps that is the best time to teach it: while I still remember what wasn't obvious, both to cultivate my comprehension, and to enrich the rationalists. For educational purposes only! Double-check anything I say. Read the comments. You have been warned! You are responsible for your own money. Do your due diligence and don't rely too much on random internet bloggers :) I do not know your financial situation. I am not your financial advisor.] Convergent Instrumental Rationality Sufficiently advanced intelligent agents, almost no matter their ultimate ends, will tend to pursue instrumental means such as self-preservation and resource acquisition. What of sufficiently advanced humans? The Outside Perspective Can you imagine a world you'd rather live in? Perhaps your life is comfortable, for the moment. But the world is truly awful right now. Children are dying as we speak, of hunger, or war, or disease. A cheap shot, but I'm sure you could come up with many other ills. The world was even worse, in many ways, in the recent past. It could become worse again, or better. Perhaps a lot better. If you truly have something to protect, what are your means? If, when pondering a question, you discover what something smarter than yourself would answer, perhaps you have found your answer also. What would a superior intelligence do in your shoes? What would a more advanced culture think of ours? You can't know all the answers to these questions, but we do know this much: acquire resources. "Need" Is a Relative Term. Distinguishing a "need" from a "want" is one of those elementary-school tasks that we all think is easy, at the time. It's a basic budgeting skill: buy what you need first, then buy what you want with what's left over. But upon deeper examination, it's not so simple. Do you need to see that doctor? Your
cfcf5abd-a02e-40d2-b4c7-f790538dfeb5
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Can there be an indescribable hellworld?
What about an un-summarisable one?

By hellworld, I mean a world of very low value according to our value scales - maybe one where large numbers of simulations are being tortured (aka [mind crimes](https://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-4.html)). A hellworld could look superficially positive, [if we don't dig too deep](https://www.lesswrong.com/posts/mSYR46GZZPMmX7q93/corrigible-but-misaligned-a-superintelligent-messiah). It could look [irresistibly positive](https://www.lesswrong.com/posts/nFv2buafNc9jSaxAH/siren-worlds-and-the-perils-of-over-optimised-search).

Could it be bad in a way that we would find indescribable? It seems that it must be possible. The set of things that can be described to us is finite; the set of things that can be described to us without fundamentally changing our values is much smaller still. If a powerful AI was motivated to build a hellworld such that the hellish parts of it were too complex to be described to us, it would seem that it could. There is no reason to suspect that the set of indescribable worlds contains only good worlds.

Can it always be summarised?
----------------------------

Let's change the setting a bit. We have a world W, and a powerful AI A that is giving us information about W. A is aligned/friendly/corrigible or whatever we need it to be. It's also trustworthy, in that it always speaks to us in a way that [increases our understanding](https://agentfoundations.org/item?id=1249).

Then if W is an indescribable hellworld, can A summarise that fact for us? It seems that it can. In the very trivial sense, it can, by just telling us "it's an indescribable hellworld". But it seems it can do more than that, in a way that's philosophically interesting.

A hellworld is ultimately a world that is against our values. However, our values are [underdefined and changeable](https://www.lesswrong.com/posts/KCg7NeKQ7MycXWpYd/our-values-are-underdefined-changeable-and-manipulable). So to have any chance of saying what these values are, we need to either extract key invariant values, [synthesise our contradictory values](https://www.lesswrong.com/posts/Y2LhX3925RodndwpC/resolving-human-values-completely-and-adequately) into some complete whole, or use some extrapolation procedure (eg [CEV](https://intelligence.org/files/CEV.pdf)). In any case, there is a procedure for establishing our values (or else the very concept of "hellworld" makes no sense).

Now, it is possible that our values themselves may be indescribable to us now (especially in the case of extrapolations). But A can at least tell us that W is against our values, and provide some description as to the value it is against, and what part of the procedure ended up giving us that value. This does give us some partial understanding of why the hellworld is bad - a useful summary, if you want.

On a more meta level, imagine the contrary - that W was a hellworld, but the superintelligent agent A could not indicate what human values it actually violated, even approximately. Since our values are not some ex nihilo thing floating in space, but derived from us, it is hard to see how something could be against our values in a way that could never be summarised to us. That seems almost definitionally impossible: if the violation of our values can never be summarised, even at the meta level, how can it be a violation of our values?
Trustworthy debate is FAI complete
----------------------------------

It seems that the consequence of that is that we can avoid hellworlds (and, presumably, aim for heaven) by having a corrigible and trustworthy AI that engages in [debate](https://arxiv.org/abs/1805.00899) or is a [devil's advocate](https://www.lesswrong.com/posts/5bd75cc58225bf0670375454/like-this-world-but). Now, I'm very sceptical of getting corrigible or trustworthy AIs in general, but it seems that if we can, we've probably solved the FAI problem.

Note that even in the absence of a single given way of formalising our values, the AI could list the plausible formalisations for which W was or wasn't a hellworld.
9b3965a5-b56c-43fc-b3d5-d3a967b46b72
trentmkelly/LessWrong-43k
LessWrong
December Monthly Bragging Thread
As in Joshua Blaine's original description (below), but may be used to brag about things you've accomplished either this month (December) or the previous one (November), assuming that you haven't brought it up in any earlier Monthly Bragging Thread.

> In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
>
> Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not things you will do. Not things you are working on. Things you have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods.
>
> So, what's the coolest thing you've done this month?
359e511c-bc6a-41fb-95af-bec0a288b64d
StampyAI/alignment-research-dataset/lesswrong
LessWrong
What are the limits of superintelligence?

This post is mostly concerned with a superintelligent AI performing recursive self-improvement; this analysis is done to help make sense of the take-off speed of such an operation.

**Plausibility and Limits**

Before considering upper limits, it may be worth considering whether general superintelligence is possible at all. It has been suggested that the idea of recursive self-improvement is similar to the infamous concept of a "perpetual motion machine". We know that a p.m.m. is impossible because it violates thermodynamics. Is there an analogous proof or argument that shows recursive self-improvement to be impossible? Some good places to start looking for hard limits on superintelligence are mathematics, computability and physics. It's also useful to think about this in terms of biology:

**Biology**

One example of incremental intelligence being improved upon is evolution: evolution has produced humans from apes. It is possible to simulate evolution in a computer, but evolution takes a very long time. This gives at least a kind of weak existence argument that improving intelligence is possible - and in fact doing so did not require any intelligence (evolution is "dumb"). If a blind process like evolution can do it, it seems likely that a specific problem-solving algorithm aimed at doing so could pull it off more efficiently.

Is it possible for something more intelligent than humans to exist? I don't think there is any reason why humans would be at the ultimate limit of intelligence. Computers are far superior at some extremely specific tasks. It's realistic to imagine something basically human-level but with computational excellence at a long list of tasks like arithmetic, exact recall of TBs of data, etc. You can also imagine that if it is possible to simulate a human mind and body in a computer, then running one at a 64x speed-up could be considered a superintelligence; or, if that is intractable, then running 65535 of them at a slightly slower speed in parallel may also be.

**Mathematics**

Do facts like Gödel's incompleteness theorem, Alan Turing's halting problem or Rice's theorem tell us that an intelligence algorithm cannot study and improve itself? I would argue that these theorems do not imply that. The halting problem tells us that there isn't a perfect algorithm that can decide every single possible instance of halting, for all inputs. This is a very strong restriction. If you weaken it to allow "don't know" answers, then you can create a sequence of increasingly precise halt-detection programs - they will always say "I don't know" for infinitely many algorithms, but there are static analysis tools that can give answers in some cases. A self-improving intelligence does not need to precisely answer yes or no in finite time to analysis questions about every possible algorithm: it just needs to be able to perform whatever analysis it needs to on the specific programs it is operating on. If it cannot deduce facts about them in a reasonable amount of time, it may be able to prioritize an alternative approach instead of getting nerd-sniped into an infinitely deep program-analysis puzzle.

Similarly with Gödel's theorem: it doesn't need to prove or disprove every theorem. It doesn't need to prove the consistency of Peano arithmetic. It probably doesn't need to mathematically prove anything, to be honest. So I do not believe that this poses any limitation on recursive self-improvement.
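Here is a minimal sketch of how such a "don't know" checker might look (the toy program descriptions and function names are invented for illustration; this is not code from any existing analysis tool): it answers only when a simple syntactic pattern settles the question, and refuses to guess otherwise.

```python
from enum import Enum

class Verdict(Enum):
    HALTS = "halts"
    LOOPS = "loops forever"
    UNKNOWN = "don't know"

def check_halting(program: dict) -> Verdict:
    """Sound but incomplete: answer only when a trivial pattern decides it."""
    if program["kind"] == "straight_line":
        return Verdict.HALTS                      # no loops at all
    if program["kind"] == "while":
        if program["condition"] == "False":
            return Verdict.HALTS                  # loop body never runs
        if program["condition"] == "True" and not program.get("has_break"):
            return Verdict.LOOPS                  # 'while True' with no break
    return Verdict.UNKNOWN                        # refuse to guess on the rest

print(check_halting({"kind": "straight_line"}).value)                 # halts
print(check_halting({"kind": "while", "condition": "True"}).value)    # loops forever
print(check_halting({"kind": "while", "condition": "n != 1"}).value)  # don't know
```

Real static analysers are far more sophisticated, but they share this three-valued structure: sound answers where a proof is found, and "don't know" everywhere else.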
**Computability and algorithmic complexity**

The strong Church-Turing thesis states that any real-world computation can be encoded into a Turing machine. Conversely, it is believed that no physical hardware can execute any non-computable processes. So there is our first hard limit: no AI would be able to perform hypercomputation.

Another computability limit is algorithmic complexity: it is simply impossible to come up with algorithms that beat proven complexity lower bounds. We don't know the complexity limits for every algorithmic task (in fact we know few) - but these are strong limits. Further, the 'best' algorithms may be terrible on small input ranges - just like the "galactic algorithms" that have great asymptotic complexity but horrible constants that make them pointless for any real-world calculations. And finding efficient algorithms for a given task is also an extremely difficult thing to do that may require intractable amounts of research and compute effort.

So a superintelligence would not magically be able to compute everything it needed to as quickly as possible: given a reasonable set of assumptions about its capabilities (assume it is able to understand scientific research and program better than any human), it would be able to pull all the state-of-the-art algorithms from the entire wealth of humanity's published computer science research, and it would be able to produce very well optimized implementations of these. This is where it would start from. It's likely that an AI system like this would reason in a very different way than humans do, so there is probably an enormous amount of "low-hanging fruit" in CS research: this suggests to me that it would be able to very quickly improve upon a decent fraction of known work - after this it would struggle to find new algorithms and improvements.

**Physics and computing hardware**

There is a limit to how much non-singular mass can exist in one region of space; at some point it becomes a black hole. In a similar way, Bremermann's limit tells us the absolute limit on how much computation can occur per unit time in a physical system of a given mass. Our most powerful GPU clusters are nowhere near this limit yet. There are also thermodynamic heat limits on computations that destroy information (Landauer's principle); this heat must be dissipated somehow. Alternatively, reversible computation can be taken advantage of to reduce the amount of heat generated. Quantum computation is one example of reversible computation.

More realistically, though, compute power needs to exist: it needs to be fabricated and plugged in before it can actually be used. Although a superintelligent AI may be very good at things like software vulnerability discovery and exploitation, it feels very unrealistic to think that it would take over something like a TSMC chip fab and start pumping out custom hardware for itself. Perhaps it would be more likely to make money and just buy the hardware it wants.

**Priors and glitched weights**

If a superintelligence was born out of LLMs, then it is worth considering the architectural priors that have been baked into the very core of the system. There is a wealth of knowledge and deep structure inside the millions of books it has read, but the entire universe that this creature has grown up in is 1-dimensional. Trying to successfully operate in the 4-dimensional spacetime we exist in may be a challenge. Furthermore, it has some strong priors built into it, like the assumption that the semantics of tokens form a 12288-dimensional Euclidean space.
That said, it is not particularly difficult for the model to relax or tweak these in code - but it is very computationally intensive to train a new model to run on top of that changed code. Something else to note is that the weights file is stored on a hard disk that may fail - large chunks of the weights may become glitched, causing strange behavior in the model. Brain damage can cause pathological behavior in humans.

**Take-off speed**

Those points are relevant to estimating the speed at which recursive self-improvement would happen. I've previously implemented a simple text-generator LSTM, along with PyTorch code to train it, in a pair-programming session with Bard. If LLMs continue to improve in capabilities and accuracy over the next few years, it won't be long before they are able to implement an attention-based transformer along with a process for training it, and study the latest research on improvements and optimizations to attention. This new research is currently coming out weekly, so perhaps a superintelligent LLM would be able to come up with its own invented optimizations and improvements within a month too. Once it iterates an improvement upon itself, the next iterations may be quicker.

All exponential processes restricted inside a finite space will level off. So maybe recursive self-improvement will level off after some number of iterations too. There is only finite compute available to it on Earth; extra compute power needs to be physically fabricated, and this is a significant bottleneck to an intelligence explosion - so is the amount of compute resource it has to invest into algorithmic improvements and optimizations.

**Conclusion**

I have seen a lot of people deny or downplay the possibility of foom, so I wanted to provide a strong plausibility argument for it here - and for it potentially happening within our lifetime if humanity does not choose to stop it from happening. I used to believe that a superintelligence would basically go to infinite intelligence within a second, but I think the reasoning in this post brings that speed down significantly: the process of recursive self-improvement would be extremely laborious and time-consuming. I expect it to occur far faster than human research does, though. It would invest its own resources near-optimally into things like funding the creation of additional hardware and algorithmic self-improvements; all of these things would feed back into further efficiency improvements and better decision making.
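As a toy illustration of the levelling-off dynamic discussed under take-off speed: a minimal sketch, assuming purely logistic returns to self-improvement and entirely made-up numbers (the capability units, ceiling and growth rate are arbitrary assumptions, not claims from the post).

```python
def simulate_takeoff(c0=1.0, ceiling=1000.0, rate=0.5, steps=30):
    """Each step, capability grows in proportion to the unused headroom
    below a hard ceiling (finite compute, finite algorithmic gains)."""
    capability = c0
    trajectory = [capability]
    for _ in range(steps):
        headroom = 1 - capability / ceiling   # fraction of the ceiling still unused
        capability += rate * capability * headroom
        trajectory.append(capability)
    return trajectory

traj = simulate_takeoff()
print([round(c, 1) for c in traj[:5]])  # roughly [1.0, 1.5, 2.2, 3.4, 5.0]: near-exponential
print(round(traj[-1], 1))               # close to the ceiling of 1000: growth has levelled off
```

Under these assumptions the early steps look like an intelligence explosion while the later steps barely move; where the ceiling actually sits, and how quickly the easy gains are exhausted, is exactly the uncertainty the post is pointing at.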
8baab413-8796-442e-9b37-1632a604d01a
trentmkelly/LessWrong-43k
LessWrong
Tend to your clarity, not your confusion Repost from https://amoretlicentia.substack.com/ Modern life is weird. For the more privileged among us, the options of what we could do with our time grow exponentially by the year, by the day. My interests are infinite, and I’m lucky to have both the wits and the means to follow just about any of them. Meanwhile, I and many of my peers are too busy doomscrolling our lives away in ennui and overwhelm as that we’d find the time to seize a mere fraction of all the unprecedented opportunities we have. We suffer crippling decision paralysis. But slowly, I’m learning to deal with that. A heuristic I lately found immensely useful for not getting crushed by the radical uncertainty of all these possibilities is: > Tend to your clarity, not your confusion. Four examples: I. I’ve grown sick of Berlin. The constant noise. The rainbow palette of smells one more disgusting than the other. The gunky subway seats better not to take a close look at. The gray. The dark. The legions of homeless one looking more woeful than the other because how couldn’t they when being more miserable than the other guy is what gives them a competitive edge. The techno parties sex parties drug parties that you need to buy your way into if you still want to feel something after having numbed your senses enough to bear the rest of city life. I have enough of that. Following that conviction, I’ve spent the days and nights of my past two weeks frantically researching careers closer to nature. Vegetable gardening, environmental education, or just doing what I’m already doing in a more rural area. And, I did all of that crumpled up inside a tiny city apartment, procrastinating the liberation of getting the fuck out just a little bit longer. Of course, this research wasn’t completely useless. But a quarter as much googling would have sufficed just fine. I followed the confusion and dissatisfaction of not being in nature, and what I got was more confusion and dissatisfaction in trying to map o
848dd895-a21e-494f-aabf-33752bf3b135
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] AlphaFold: a solution to a 50-year-old grand challenge in biology To my eyes, this looks like the sort of useful advance in applying AI that doesn't really improve capabilities, and thus is just net positive, even judged by a safety mindset. But I'm curious to know if other people think differently. (Note that it might be easier to discuss this after the paper is posted online)
0aa0f108-444b-419d-8806-1c439e0e91db
StampyAI/alignment-research-dataset/blogs
Blogs
Announcing the 2010 Singularity Research Challenge

Offering unusually good philanthropic returns — meaning greater odds of a positive Singularity and lesser odds of human extinction — the Machine Intelligence Research Institute has launched a new challenge campaign. The sponsors, Edwin Evans, Rolf Nelson, Henrik Jonsson, Jason Joachim, and Robert Lecnik, have generously put up $100,000 of matching funds, so that every donation you make until February 28th will be matched dollar for dollar. If the campaign is successful, it will raise a full $200,000 to fund MIRI's 2010 activities.

For almost a decade, the Machine Intelligence Research Institute has been asking questions on the future of human civilization: How can we benefit from increasingly powerful technology without succumbing to the risks, up to and including human extinction? What is the best way to handle artificial general intelligence (AGI): programs as smart as humans, or smarter? Among MIRI's core aims is to continue studying "Friendly AI": AI that acts benevolently because it holds goals aligned with human values. This involves drawing on and contributing to fields like decision theory, computer science, cognitive and moral psychology, and technology forecasting.

Creating AI, especially the Friendly kind, is a difficult undertaking. We're in it for as long as it takes, but we've been doing more than laying the groundwork for Friendly AI. We've been raising the profile of AI risk and Singularity issues in academia and elsewhere, forming communities around enhancing human rationality, and researching other avenues that promise to reduce the most severe risks the most effectively.

If you make a donation to the Machine Intelligence Research Institute, you can choose which grant proposal your donation should help to fill. Any time a grant proposal is fully funded, it goes into our "active projects" file: it becomes a project that we have money enough to fund, and that we are publicly committed to funding. (Some of the projects will go forward even without earmarked donations, with money from the general fund — but many won't, and since our work is limited by how much money we have available to support skilled staff and Visiting Fellows, more money allows more total projects to go forward.) Donate now, and seize a better than usual chance to move our work forward.

The post [Announcing the 2010 Singularity Research Challenge](https://intelligence.org/2009/12/23/announcing-the-2010-singularity-research-challenge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
65df950c-20b3-4a41-94b2-839b5a052443
StampyAI/alignment-research-dataset/blogs
Blogs
January 2015 Newsletter

[Machine Intelligence Research Institute](http://intelligence.org)

Thanks to the generosity of 80+ donors, we completed [our winter 2014 matching challenge](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/), raising $200,000 for our research program. Many, many thanks to all who contributed!

**Research Updates**

* Our major project of the past five months has been a new overview of our technical research agenda, plus six supporting papers which cover each research area in more detail. The overview report is [now available](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/), and so far we’ve released two of the supporting papers, on [corrigibility](https://intelligence.org/files/Corrigibility.pdf) and [decision theory](https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf).
* Two more reports and one paper: “[Computable probability distributions which converge…](https://intelligence.org/2014/12/16/new-report-computable-probability-distributions-converge/)“, “[Tiling agents in causal graphs](https://intelligence.org/2014/12/16/new-report-tiling-agents-causal-graphs/),” and “[Concept learning for safe autonomous AI](https://intelligence.org/2014/12/05/new-paper-concept-learning-safe-autonomous-ai/).”
* A MIRI technical report from 2013, “Responses to catastrophic AGI risk: a survey,” has now been [published in *Physica Scripta*](http://iopscience.iop.org/1402-4896/90/1/018001/article).

**News Updates**

* Luke wrote a short analysis for the World Economic Forum’s blog: “[Two mistakes about the threat from artificial intelligence](https://agenda.weforum.org/2014/12/two-mistakes-about-the-threat-from-artificial-intelligence/).”
* Our *Superintelligence* online reading group is in its 16th week, discussing [Tool AIs](http://lesswrong.com/lw/l9p/superintelligence_16_tool_ais/).

**Other Updates**

* Eric Horvitz has [provided initial funding](http://www.nytimes.com/2014/12/16/science/century-long-study-will-examine-effects-of-artificial-intelligence.html?_r=0) for a 100-year Stanford program to study the social impacts of artificial intelligence. The [white paper](https://stanford.app.box.com/s/266hrhww2l3gjoy9euar) lists 18 example research areas, two of which amount to what Nick Bostrom [calls](http://smile.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/) the superintelligence control problem, MIRI’s research focus. No word yet on how soon anyone funded through this program will study open questions relevant to superintelligence control.

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director

The post [January 2015 Newsletter](https://intelligence.org/2015/01/01/january-2015-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
765a866b-e6b6-4944-be20-8368df1b642d
trentmkelly/LessWrong-43k
LessWrong
I finally got ChatGPT to sound like me I publish lots of words on the Internet. These words are fed into LLMs such as ChatGPT. I have published enough training data that the world's most powerful AIs have a simulation of me inside of them. I have basically uploaded a copy of myself into the cloud. The AI simulation of me gets smarter every time OpenAI and its competitors upgrade their AI. Cool, huh? Who doesn't want a simulation of themselves that thinks way faster and is networked deep into the world's most powerful superintelligences? Ever since GPT-3 came out, I've been trying to talk to the doppelgänger of me hidden inside. There are a few problems. First of all, GPT needs to know who I am with high fidelity. At first it just generated a stereotypical LessWrong blogger. That wasn't interesting to me. I didn't want a clone of a stereotype of my demographic group. I want a clone of me, with all my quirks and weirdness. For this to work, GPT would have to know who I am i.e. where I exist in its vector space representation of the world. Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website. > What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They’re willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights. > > ⋮ > > In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system. Now that ChatGPT
9c16c26d-1cff-4e95-af28-8784916f7671
trentmkelly/LessWrong-43k
LessWrong
I'm scared. Recently, I've been ratcheting up my probability estimate of some of Less Wrong's core doctrines (shut up and multiply, beliefs require evidence, brains are not a reliable guide as to whether brains are malfunctioning, the Universe has no fail-safe mechanisms) from "Hmm, this is an intriguing idea" to somewhere in the neighborhood of "This is most likely correct." This leaves me confused and concerned and afraid. There are two things in particular that are bothering me. On the one hand, I feel obligated to try much harder to identify my real goals and then to do what it takes to actually achieve them -- I have much less faith that just being a nice, thoughtful, hard-working person will result in me having a pleasant life, let alone in me fulfilling anything like my full potential to help others and/or produce great art. On the other hand, I feel a deep sense of pessimism -- I have much less faith that even making an intense, rational effort to succeed will make much of a difference. Rationality has stripped me of some of my traditional sources of confidence that everything will work out OK, but it hasn't provided any new ones -- there is no formula that I can recite to myself to say "Well, as long as I do this, then everything will be fine." Most likely, it won't be fine; but it isn't hopeless, either; possibly there's something I can do to help, and if so I really want to find it. This is frustrating. This isn't to say that I want to back away from rationalism -- it's not as if pretending to be dumb will help. To whatever extent I become more rational and thus more successful, that's better than nothing. The concern is that it may not ever be better enough for me to register a sense of approval or contentedness. Civilization might collapse; I might get hit by a bus; or I might just claw through some of my biases but not others, make poor choices, and fail to accomplish much of anything. Has anyone else had experience with a similar type of fear? Does anyone have
c0aae6bc-3a9c-44e0-b575-870a66e62c6d
trentmkelly/LessWrong-43k
LessWrong
Look for Lone Correct Contrarians Related to: The Correct Contrarian Cluster, The General Factor of Correctness (Content note: Explicitly about spreading rationalist memes, increasing the size of the rationalist movement, and proselytizing. I also regularly use the word 'we' to refer to the rationalist community/subculture. You might prefer not to read this if you don't like that sort of thing and/or you don't think I'm qualified to write about that sort of thing and/or you're not interested in providing constructive criticism.) I've tried to introduce a number of people to this culture and the ideas within it, but it takes some finesse to get a random individual from the world population to keep thinking about these things and apply them. My personal efforts have been very hit-or-miss. Others have told me that they've been more successful. But I think there are many people that share my experience. This is unfortunate: we want people to be more rational and we want more rational people. At any rate, this is not about the art of raising the sanity waterline, but the more general task of spreading rationalist memes. Some people naturally arrive at these ideas, but they usually have to find them through other people first. This is really about all of the people in the world who are like you probably were before you found this culture; the people who would care about it, and invest in it, as it is right now, if only they knew it existed. I'm going to be vague for the sake of anonymity, but here it goes: I was reading a book review on Amazon, and I really liked it. The writer felt like a kindred spirit. I immediately saw that they were capable of coming to non-obvious conclusions, so I kept reading. Then I checked their review history in the hope that I would find other good books and reviews. And it was very strange. They did a bunch of stuff that very few humans do. They realized that nuclear power has risks but that the benefits heavily outweigh the risks given the appropriate alternative, and
640e9b82-237c-4075-9e25-dd1014e276c9
trentmkelly/LessWrong-43k
LessWrong
A new definition of "optimizer" Here, I provide a new definition of "optimizer", as well as better explain a previous one I gave. I had previously posted a definition, but I think the definition I gave was somewhat wrong. It also didn't help that I accidentally wrote my previous definition incorrectly, and only days later realized I had written out the definition mistakenly (and then the modified definition contained a new mistake.) The way I presented my definition in the last article was also overly confusing, so I'll try to explain it better. First, I'll present a someone else's proposed definition of "optimizer" and explain how my definitions are intended to improve upon is. Then, I'll summarize the definition I (meant to) give in my previous post, point out some potential problems, and then outline changes that could fix them. However, these modification make the definition more complicated, so I then I provide a new, though informal, definition of optimizer that eloquently avoid the issues with the original definition. Then, I compare my definition to a previous proposed definition of "optimizer" and explain how my definition is intended to improve upon it. Afterwards, I explain why defining "optimizer" doesn't matter much, anyways. Alex Flint's definition The article, "The ground of optimization" defines "optimizer", saying: > An optimizing system is a system that has a tendency to evolve towards one of a set of configurations that we will call the target configuration set, when started from any configuration within a larger set of configurations, which we call the basin of attraction, and continues to exhibit this tendency with respect to the same target configuration set despite perturbations. Though this definition has some value, it is extremely broad and includes things that wouldn't be normally considered optimizers. For example, it considers toasters to be optimizers. Specifically, consider a toaster left unattended toasting a piece of bread. No matter how you reconfigure the to
46e33305-9ecd-43db-aa71-931d7a3578d3
trentmkelly/LessWrong-43k
LessWrong
Meetup : Rationality Reading Group (57-61) Discussion article for the meetup : Rationality Reading Group (57-61) WHEN: 07 July 2015 06:30:00PM (-0700) WHERE: Paul G. Allen Center (185 Stevens Way, Seattle, WA) Room 503 Reading group for Yudkowsky's "Rationality: AI to Zombies", which is basically an organized and updated version of the Sequences from LW (see http://wiki.lesswrong.com/wiki/Sequences). The group meets to discuss the topics in the book, how to apply and benefit from them, and related topics in areas like cognitive biases, applied rationality, and effective altruism. You can get a copy of the book here: https://intelligence.org/rationality-ai-zombies/ The reading list for this week is six topics from the "How to actually change your mind" section. These are the same as from last week's meetup, as that didn't really happen. They are (actually 57-61, LW's auto-formatting is screwing it up): 1. Politics is the Mind-Killer 2. Policy Debates Should Not Appear One-Sided 3. The Scales of Justice, the Notebook of Rationality 4. Correspondence Bias 5. Are Your Enemies Innately Evil? We previously covered the "Map and territory" sequence a few months ago, but please don't feel a need to have read everything up to this point to participate in the group. Event is also on Facebook: https://www.facebook.com/events/1685501021668755/ We're meeting on the 5th floor. If you show up and the door into the room is locked, knock and look around for us elsewhere on the fifth floor if nobody answers. If the doors to the building are locked, try the other ones and don't believe the little red lights; try anyway. If the doors are, in fact, locked, we'll try to have somebody to let people in. There's usually snacks at the meetup, though feel free to bring something. We usually get dinner afterward, around 9PM or so. Discussion article for the meetup : Rationality Reading Group (57-61)
147faeb0-e524-4221-a41a-6de0a7264b2d
trentmkelly/LessWrong-43k
LessWrong
Babble challenge: 50 ways of solving a problem in your life Back again. Let’s become stronger. This week’s challenge:  Years ago you found yourself hurled into existence, facing a vast universe with a mind capable of the Art of Rationality, reading a LessWrong post at this very moment.  Yet in your life there is a particular problem. I don’t know what it is. Maybe your chair is uncomfortable; you’re not getting as high scores as you want at the Math Olympiad; or you’ve got insomnia.  Whatever it is, pick one specific problem in your life.  Find a way to solve it.  You have 1 hour to come up with 50 ways.  (But no need to implement the solutions within 1 hour!) Looking back Here are the champions who made it to 50 last week, with stars indicating their streak:  ★★★ Slider, gjm, Harmless, jacobjacob, Tetraspace Grouping ★★ athom, johnswentworth, ryan_b, Ericf, Bucky, Mark Xu, CptDrMoreno, Yonge ★ TurnTrout, Tighe, knite Why measure streaks?  Last week Bucky commented:  > I don't like measuring things by streaks - if you want to do a list I think doing it by total number of challenges completed is better. Streaks are a less accurate indication of effort put in or potential gains achieved and have more potential to create unhealthy incentives. But I disagree. I replied: > One of the goals of the challenge is building a culture of practice. I think consistency is an incredibly important part of that. That's how you get compound returns. A portfolio that grows 7% every year will grow ~30x over fifty years. But a portfolio that grows that much only every other year will only grow about ~5x. (Even though the first one only put in "twice as much effort".) Moving forwards I’m now entering week 4 out of the 7-week babble streak I committed to. If you want more regularity in practicing your creativity, feel free to post a comment committing to also going all the way to 7.  This week we’re trying something new: applied babble. I haven’t tried it before, so am very curious to see what will happen. Feel free to add a no
6a2f3742-5e5f-4920-944d-14cac27ceb0a
trentmkelly/LessWrong-43k
LessWrong
Meetup : Amsterdam - Meetup #12 - Friendly AI Discussion article for the meetup : Amsterdam - Meetup #12 - Friendly AI WHEN: 19 February 2017 03:13:37PM (+0100) WHERE: Meester Treublaan 18, 1097 DP Amsterdam, Netherlands AI timelines have shortened. This meetup is dedicated to brainstorming ways to save, well, our lives. Discussion article for the meetup : Amsterdam - Meetup #12 - Friendly AI
5de758ff-7ec2-408a-bb1f-9d15232d4645
trentmkelly/LessWrong-43k
LessWrong
Good Heart Donation Lottery Winner
The Good Heart Donation Lottery is closed. And the winner is...

0 [ggworley (master) ~/lottery](1649449423 2022-04-08 13:23:43)
$ cat lottery.rb
puts Hash[
  "EA Funds - Long-Term Future Fund" => 21,
  "Against Malaria Foundation" => 18,
  "EA Funds - Animal Welfare Fund" => 13,
  "GiveWell Maximum Impact Fund" => 11,
  "EA Funds - EA Infrastructure Fund" => 11,
  "Machine Intelligence Research Institute" => 11,
  "EA Funds - Global Health and Development Fund" => 9,
  "EA Funds - Even distribution" => 9,
  "FTX Future Fund" => 5,
  "Planned Parenthood" => 1
].collect { |k,v| [k] * v }.flatten.sample(1)
0 [ggworley (master) ~/lottery](1649449430 2022-04-08 13:23:50)
$ ruby lottery.rb
EA Funds - EA Infrastructure Fund

The EA Infrastructure Fund! I've updated my payment info in LessWrong so that my GHT will be cashed out to the EA Infrastructure Fund. I believe the payout should be $837 based on what I'm seeing, which means I'll be making a donation of $163 to the EA Infrastructure Fund to round up to the $1000 pool minimum I promised. I'll make payment shortly after I'm sure of the amount once LessWrong pays out on their side.

Thanks to everyone who voted and suggested charities as part of this fun extension to this year's April Fool's joke.
c753d41c-1dcc-461f-a702-290e67d8cb69
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: What If Discussion article for the meetup : Washington, D.C.: What If WHEN: 06 September 2015 03:00:00PM (-0400) WHERE: National Portrait Gallery Crossposted from mailing list. Meeting in courtyard, gathering from 3:00pm, hard start 3:30pm - until closing. In the spirit of XKCD creator Randall Munroe's What If, we'll be nerd sniping each other with ridiculous hypotheticals. Come prepared! Upcoming Meetups: * Sep 13: Sorting Hat * Sep 20: Fun and Games * Sep 27: Singing (weather permitting) Discussion article for the meetup : Washington, D.C.: What If
b7de9dff-7ddb-4645-a2a3-850df553e74e
StampyAI/alignment-research-dataset/blogs
Blogs
Machine Intelligence Research Institute Progress Report, December 2011 “I think the Machine Intelligence Research Institute has some very smart people working on the most important mission on Earth, but… what exactly are they *doing* these days? I’m in the dark.” There’s a good reason I hear this comment so often. We haven’t done a good job of communicating our progress to our supporters. Since being appointed Executive Director of the Machine Intelligence Research Institute (SI) in November, I’ve been working to change that. I gave [two](http://lesswrong.com/r/discussion/lw/8s6/video_qa_with_singularity_institute_executive/) [Q&As](http://lesswrong.com/lw/980/singularity_institute_executive_director_qa_2/) about SI and explained our research program with a [list of open problems in AI risk research](http://lukeprog.com/SaveTheWorld.html). Now, I’d like to introduce our latest effort in transparency: **monthly progress reports**. We begin with last month: December 2011. What did we do in December 2011? (From this point on I’ll refer to myself as “Luke,” for clarity.) * **Winter fundraiser**. We launched our winter fundraiser and have been contacting our supporters. The fundraiser has raised over $40k so far, though we still have $60k to go! (So, please [donate](http://intelligence.org/donate)!) * **Singularity Summit 2012**. Our chief operating officer, Amy Willey, worked all month on preparations for Singularity Summit 2012, with much help from Luke. As a result we have now chosen a team of professionals with which we will take the Summit to “the next level,” and we’ve already confirmed several major speakers: Ray Kurzweil, Steven Pinker, Tyler Cowen, Temple Grandin, Peter Norvig, Robin Hanson, Peter Thiel, Melanie Mitchell, Vernor Vinge, and Carl Zimmer. We have also opened negotiations with many other speakers. This is a big improvement over our preparations for Singularity Summit 2011, which effectively began in May 2011, leaving us little time to capture certain speakers and develop certain kinds of media coverage. This much progress at such an early stage, in addition to a larger budget and greater professional assistance, will allow Singularity Summit 2012 to be a major leap forward for the event. Amy has also been developing arrangements for a *possible* European Singularity Summit in 2012. * **Rationality Org**. As explained in our [strategic plan](https://intelligence.org/wp-content/uploads/strategicplan2011.pdf), we recognize the branding confusion produced by focusing on both AI risk research *and* rationality education, so we are preparing to spin off a separate rationality education organization so that the Machine Intelligence Research Institute can focus on AI risk research. Internally, we are calling the rationality education organization “Rationality Org.” Anna and Eliezer, with some help from Luke, did a lot of work developing plans for the future Rationality Org. We spent even *more* time developing the core rationality lessons, testing versions of them on different groups of people, and iterating the content. We expect the Rationality Org to launch late this year or early next year, and we expect it to not only [raise the sanity waterline](http://lesswrong.com/lw/1e/raising_the_sanity_waterline/) but also bring significant funding toward existential risk reduction. * **New website design**. 
Our media director, Michael Anissimov, with much help from Luke, worked out the strategy and design of SI’s new website and worked with a designer to iterate the design several times. The designer is now programming the site. * **New donor database**. In December, our Director of Development, Louie Helm, finished setting up our new donor database, including the custom code for automatically importing data from Paypal, Google Checkout, etc. This database gives us a much better view of who our supporters are, and allows us to more effectively thank them for their support. Anissimov wrote personal thank-you notes to hundreds of past donors. * **Research articles**. Luke and Anna made continued progress on their overview article “Intelligence Explosion: Evidence and Import.” Carl continued work with [FHI](http://www.fhi.ox.ac.uk/)‘s Stuart Armstrong on their article “Arms Races and Intelligence Explosions,” and continued work with Nick Bostrom on their article “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects.” * **Other articles**. Luke wrote a few articles for Less Wrong: [Hack Away at the Edges](http://lesswrong.com/lw/8ns/hack_away_at_the_edges/), [Why study the cognitive science of concepts](http://lesswrong.com/r/discussion/lw/8oy/why_study_the_cognitive_science_of_concepts/), and [So You Want to Save the World](http://lesswrong.com/lw/91c/so_you_want_to_save_the_world/). Eliezer made lots of progress on his new Bayes Theorem tutorial, including (outsourced) illustrations and much audience testing. * **Eliezer’s book**. Eliezer finished the book proposal for his first book (already mostly written), *The Science of Changing Your Mind*. We have begun looking for good agents to represent the book. * **Facing the Singularity**. Luke continued to develop his online book *Facing the Singularity*, a layman’s introduction to the Singularity, its consequences, and what we can do about it. The chapters he wrote in December 2011 were: [The Crazy Robot’s Rebellion](http://facingthesingularity.com/2011/the-crazy-robots-rebellion/), [Not Built to Think About AI](http://facingthesingularity.com/2011/not-built-to-think-about-ai/), [Playing Taboo with “Intelligence”](http://facingthesingularity.com/2011/playing-taboo-with-intelligence/), [Superstition in Retreat](http://facingthesingularity.com/2011/superstition-in-retreat/), [Plenty of Room Above Us](http://facingthesingularity.com/2011/plenty-of-room-above-us/), and [Don’t Flinch Away](http://facingthesingularity.com/2011/dont-flinch-away/). * **Additional transparency efforts**. Anissimov and Luke began work on the design and content for an annual report. They also shot and produced [Luke’s video Q&A #1](http://lesswrong.com/r/discussion/lw/8s6/video_qa_with_singularity_institute_executive/). * **Optimal philanthropy**. The optimal philanthropy movement (e.g. [Giving What We Can](http://www.givingwhatwecan.org/)) is growing exponentially. Carl and Anna did much collaboration and research with other members of the movement. Partly due to their work, the optimal philanthropy movement has great awareness of the case for [existential risk reduction as optimal philanthropy](http://www.existential-risk.org/concept.pdf), which should bring significant funding for existential risk reduction work in the coming years. * **Meetings with advisors, supporters, and potential researchers**. 
During December 2011, various SI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, along with other topics. We also met with several potential researchers to gauge their interest and abilities. * **Google Adwords upgrade**. For months, Louie and others have been tweaking the ads we get from $10k/month of Google Adwords donated to us by Google. By December 2011, our ads were so successful that we qualified for an upgrade, and are now receiving $40k/month of free advertising via Google Adwords. * **Better financial management**. In December 2011 we began to train our new treasurer, long-time donor and friend of SI, Jesse Liptrap. This means that someone outside the organization is keeping a close watch on our finances. We also began work on improving our book-keeping and accounting practices, which will allow better budgeting, forecasting, and resource management. * **Unpublished research**. As with most research institutes, most of our research does not end up in a published paper for 1-3 years, if ever, even though it informs our views on many things. Unpublished research in December 2011 included research on population ethics, brain-computer interfaces, optimal philanthropy, [technological forecasting](http://lesswrong.com/lw/9ao/longterm_technological_forecasting/), nuclear extinction risks, AI architectures, anthropics, decision theories, rationality training, Oracle AI, science productivity, and more. SI’s research associates contributed to some of this research, including the Less Wrong discussion post [A model of UDT with a halting oracle](http://lesswrong.com/r/discussion/lw/8wc/decision_theory_with_halting_oracles/). * **New board member**. [Quixey](http://www.quixey.com/) co-founder and CEO, [Tomer Kagan](http://www.forbes.com/pictures/lmf45kde/tomer-kagan-co-founder-and-ceo-quixey-28/), was added to SI’s board of directors. Tomer is a good friend and brings a wealth of business and management experience to our team. * **Much more**. Of course, we worked on dozens of other, smaller projects. These include: updates to [IntelligenceExplosion.com](http://intelligenceexplosion.com/); development of contacts for Rationality Org; the organization of regular SI staff dinners, to promote coordination and friendship; speaking with donors at Peter Thiel’s “Fast Forward” party; development of a database of helpful volunteers and assistants; implementing [Olark](http://www.olark.com/) on our [donate page](http://intelligence.org/donate/); meetings with reporters from various media organizations; uploading old videos to Vimeo and YouTube; fixing errors and outdated content on our website; finishing our 2010 Form 990 and sending it to Brandon Reinhart to add to his [financial examination](http://lesswrong.com/lw/5il/siai_an_examination/) of the Machine Intelligence Research Institute; preparing a [new template for SI research publications](http://commonsenseatheism.com/wp-content/uploads/2012/01/Dewey-Learning-what-to-value.pdf) (courtesy of research associate Daniel Dewey); and much more. The post [Machine Intelligence Research Institute Progress Report, December 2011](https://intelligence.org/2012/01/16/singularity-institute-progress-report-december-2011/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
cdf8d202-7f5f-4f30-8635-0b2172a4d166
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Universality and the “Filter”

*This post was written under Evan Hubinger’s direct guidance and mentorship, as a part of the* [*Stanford Existential Risks Institute ML Alignment Theory Scholars (MATS) program*](https://www.lesswrong.com/posts/FpokmCnbP3CEZ5h4t/ml-alignment-theory-program-under-evan-hubinger)*.*

**TL;DR:** In this post, I refine universality in order to explain the “Filter”. The Filter is a method of risk analysis for the outputs of powerful AI systems. I refine universality by replacing the language of “beliefs” and proposing a handful of general definitions in part I. Then, I explain a key application of universality: building the Filter. I explain the Filter non-technically through a parable in part II, then technically in part III. To understand the Filter as presented in part II, it is not necessary to comprehend universality to the rigor that I present in part I; it is only necessary to see that all models are wrong[1]. While this post builds technically off a previous post[2], part II is meant to be understandable to all audiences.

I. Refining Universality
========================

Universality is discussed [here](https://ai-alignment.com/towards-formalizing-universality-409ab893a456), [here](https://www.lesswrong.com/posts/farherQcqFQXqRcvv/universality-unwrapped), and [here](https://www.lesswrong.com/posts/R5Euq7gZgobJi5S25/nuances-with-ascription-universality). Here are some features of universality that have been missing from its presentation.

1. The need to replace “beliefs”
--------------------------------

A “belief” of a system is used 60 times in Paul Christiano’s [attempt to formalize universality](https://ai-alignment.com/towards-formalizing-universality-409ab893a456). I think a “belief” is not the right language for an object of a system, for two reasons. First, it conflates terms worth making distinct. When “beliefs” are used in Paul Christiano’s [attempt to formalize universality](https://ai-alignment.com/towards-formalizing-universality-409ab893a456), they don’t always refer to the same thing. Second, “belief” should be reserved for a different concept. The way humans think of “belief” is not how it’s used there.

So, I rewrote the formalization of universality to not include the word “belief”. The handful of terms presented below combine to replace the previous notion of “belief”. Presenting universality without the language of “beliefs” is, all said, certainly more cumbersome. I hope this reframing will help improve the methodology for applying universality to HCH. I apply the general definitions below to universality and the Filter in section III.2.
2. General definitions for universality
---------------------------------------

Consider κ, a list of action-shaping intelligible information.
κ = {κ_j}, where the κ_j are pieces of intelligible information. Pieces of intelligible information are facts, which are sentences (e.g. “The sky is blue”, “there are infinitely many twin primes”). Another term, X, is the output of a computation. X is a list of actions, X = {x_l} (e.g. printed to my computer the sentence “the sky is blue”, wiped the memory of Alice’s computer next door).

One may view a computation A as, upon an input of the world γ, taking a set of actions X: A(γ) = X. κ is a part of γ; the action-shaping intelligible information of a system is the part of the input of the world which determines the actions the system will take. More formally:

![](https://lh6.googleusercontent.com/Dx7xreEFGSbt4W_glktn_t2L5Skqqr-i1ix4Eu5EYgiLMRF5N6qbWuMlJ6BYPeIw7EBfU4tN1fUNLXcJyCEdY2KW9tcUuFFJ-dkj8pxEaSkh4bFP4OBDFaDJu11ih8s_pkcwrXjx)

Now, what was previously known as a “belief” is usually denoted as an action-shaping list of intelligible information, κ = {κ_j} for j = j_0, …, J. In item II (above), a formal constraint of causality requires some causal theory, and also imports some causal assumptions; I do not provide these here. A causal constraint is necessary, in addition to the statement A(κ) = x_l, to preclude the inclusion in κ of arbitrary pieces of information which are not related to the actions x_l and which, without this constraint, are permitted to be very numerous. I expect that item III (above) is closer to a common rule of thumb than to a universal truth. Some theories of causality between κ and x_l might afford that in special circumstances, κ is unique (N = 1).

Then, I may present universality with a couple of formulas:

![](https://lh3.googleusercontent.com/sPnDFX5neFitGmT58G_mhFS4g6W9cVH2I7MsYljFCwOzjaf3W7qny6izwEfBFhsthOny1pjplDAvECAd4Uz_lK33xfw6TrC6wLkYtl7Z6-VRQ9F3PfaYTf-GDfDdBENuzYkXDt5X)

Note that the first and third statements are directly transcribed from Paul Christiano’s [attempt to formalize universality](https://ai-alignment.com/towards-formalizing-universality-409ab893a456). The reason I’m working to make universality precise is that this precision is important to build the Filter. The Filter is a method for risk analysis on large HCH models.

3. Edit of Paul Christiano’s “*Towards Formalizing Universality*” with these definitions
----------------------------------------------------------------------------------------

In light of the statements above, [this appendix](https://documentcloud.adobe.com/link/track?uri=urn:aaid:scds:US:394675b0-cd0d-478d-ad7a-841a42bd03e4) is a markup of Paul Christiano’s original presentation of universality, with the formulas now uncovered and no mention of “belief”.

**The “Filter”**
================

II. Parable of the Social Welfare HCH Team’s Blue People
========================================================

1. Parable of the blue skin
---------------------------

Say we have a social welfare HCH team which, up until the present, is very well trusted. The team is innovative and successful. For the past decade, the team has proposed interventions that prove to be very helpful. Over time, the team has carefully developed a set of metrics by which they internally measure how successful an intervention will be for improving social welfare. These interventions are very well received and regarded by the world populace. People agree the team’s proposals have done a great good. The team is very sane; we agree with all their expectations about how their interventions will affect the world.
With a long and honorable reputation, the social welfare team proposes a particularly striking intervention: the team suggests we turn people’s skin blue. This is simply because blue skin exceeds all the team’s metrics. The blue skin intervention ranks higher on the team’s metrics than any previous proposal the team has made.

Humans assemble to discuss what to do. Since this intervention is unusual, the human board running the HCH model holds council. It’s not the first time in the social welfare team’s history that this council has convened, but it happens rarely. They suppose an explanation behind this move may be to start a ‘blank slate’ from the skin-color-based social disparity that has accumulated. After a longer than usual meeting, the board concludes that because of the apparently reasonable explanation behind the blue skin intervention, and the trusted reputation of the team, the social welfare team is allowed to proceed with its plan. Maybe another human board wouldn’t have approved the blue skin plan. But, this one did.

With the seal of approval, the social welfare team gets straight to work on the blue skin. The team learns a bit of chemical engineering, and develops a new, very safe, fast-acting bleaching chemical for skin. As a result of turning everyone’s skin blue, we see unusual and extremely negative consequences in the world. The process of genetic modification the team uses in the chemical has extreme adverse health effects for a small number of people. In the administration of this chemical, allergic reactions cannot be universally accounted for, and infection remains a salient risk. These are small concerns to the social welfare team compared to the gleaming benefits of blue skin on the team’s social welfare metrics. Concern grows.

Blue skin also extinguishes many human values and cultural practices that revolve around skin, and which were fundamental to people’s lives. Blue skin is very dry, similar to fish scales. Rituals related to oil have no meaning, as oil immediately runs off blue skin. People grow very indignant about the blue skin intervention. Across the globe, anti-blue skin campaigns emerge. The social welfare team’s metrics for social welfare shine so brightly that the team continues to turn people’s skin blue anyway. Although the metrics developed by the social welfare team indeed soar, we watch in horror as travesty unfolds. A third of the world has had its skin turned blue before the intervention is stopped.

\_\_\_\_\_\_

Now let’s consider an alternate version of reality. We’re back to the social welfare team’s first proposal for blue skin, before the council convenes. Now we have two computations in hand: “Info” and “Trouble”. Info and Trouble don’t have a particular stake in social welfare. They definitely don’t abide by the objectives and functions that the social welfare team has developed. Info and Trouble know more about the world than just social welfare. They know why the social welfare team made its choice to turn people blue. But they also know about all sorts of other ways to figure out why turning people blue is undesirable, that have nothing to do with social welfare.

Say Info and Trouble have access to widespread world health data, and can notice that turning people blue has a big effect on this data. And, say Info and Trouble have access to all of humanity’s literary history. They can notice that in literature and poetry, humans have often expressed a deep affinity to organs like the brain, and the heart.
They can notice that along with this affinity, humans express fear and resentment at changes in these organs, and call these changes “travesties”. They can also notice that in biology texts, the brain and heart are described in a very similar structure to the skin.

Info notices all these reasons why blue skin might be bad, and more. Info develops a huge array of possible methods to pose a conclusion about the effect of turning people’s skin blue. These methods are based on information reaching far past the scope of ‘social welfare’ that the social welfare team considers. From this wide array, Info chooses a handful of the methods that will most likely demonstrate a problem about blue skin. Info then presents this handful of methods to Trouble.

Now, as the author of this tale and without a real Info in hand, I can only take a naive guess at the most likely methods by which Info finds that blue skin poses a concern. Let’s say these are really the two examples of impact on world health and historical affinity to organs. Info passes this conclusion, as a big datafile on “BLUE SKIN”, to Trouble. Trouble can now determine whether blue skin really poses a problem.

Trouble sees strong literary connections between blue skin and a bunch of negative remarks in literature. The subfile on “BLUE SKIN” that has to do with literature, from Info, is covered with warnings. Trouble is on the lookout for negative remarks, and when they’re heavily correlated to the object Trouble is meant to evaluate, Trouble says that object is not good. With the shockingly marked-up file it’s received, Trouble proceeds to say that blue skin is not an acceptable intervention on the behalf of the social welfare team.

We call Info and Trouble, together, “the Filter”. The social welfare team is highly aware that the Filter is good at its job, and has methods to assess the quality of interventions that are far past the scope of the social welfare team’s own consideration. The social welfare team hears word from the Filter that blue skin is a highly dangerous intervention upon turning their fourth person blue. Even though the blue skin intervention still provides brilliant projected results on all the social welfare team’s metrics, the team immediately stops all proceedings to turn anyone else’s skin blue. The team continues with organising other social interventions for which they are admired and trusted by the world populace.

2. The “Filter”
---------------

> “The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.” -Niels Bohr
>
> “A great truth is a truth whose opposite is also a truth.” -Thomas Mann

The Filter does what the human council cannot. The Filter and the human council share the spirit of protecting the humanity of HCH models by providing external information. Indeed, the Filter and the human council are one in spirit; the Filter is the more capable appendage of the human council. Now, sharing in ‘spirit’ might be an overreaching description of computations today. No matter, we can be perfectly satisfied in conceiving of the Filter as a very capable appendage.

See, the Filter and the human council rely on the aphorism that “all models are wrong”. No matter what objective a clever HCH model suggests, and no matter how reliably often humans really follow that objective, there will always be a small number of cases in which humans would think following this objective is not the right thing to do.
Humans will sometimes act in contradiction with their usual objectives, and use some external information to explain their actions. We can always find information outside of our model to explain why the model is not quite right. So, the human council, and the Filter, can perform a sanity check: they find and evaluate these opposites. They search for counterexamples to the HCH model’s claim, which certainly exist, and reflect on whether these should impede us. Info asks, “What’s the most useful way to figure out if HCH’s results are problematic?” And Trouble follows with, “According to that way, are HCH’s results problematic?”

In this search for opposites, the council and the Filter ought to look far outside the world in which the suggesting HCH model normally lives. They aren’t constrained by the aim that gave rise to that suggestion. And, they look far past that HCH model’s typical library of information. The broadness of this search is the mark of excellent quality for the council, and the Filter.

Formally, what I’ve described means that the Filter is an ascription-universal HCH which can say a model’s outcomes are unacceptable based on external information that doesn’t depend on what the model is trying to do.

3. The “Filter” as a better version of a human council
------------------------------------------------------

I hope you will see through the parable of the blue skin that occasionally, a human council checking an HCH team is unable to adequately carry out its duties, and will need assistance from a more capable filter. I’d like to make explicit here a few ways in which the Filter could be a more competent version of a human council.

* For example, the human board proved fallible in the case of the blue skin, and didn’t have enough information.
* The Filter has more computational power than a human council.
* As a result, the Filter may perform broader searches than a human council.
* Unlike a human council, the Filter can deliberate with an amount of information on the same order of magnitude as that which the HCH model it evaluates has access to.

On the condition that we trust our human board, and the Filter trains Info and Trouble on this board, there is no need to identify an ethical theory for the Filter.

4. The “Filter” follows the Hippocratic Oath
--------------------------------------------

Modern medicine and large HCH teams have a lot in common. The drug thalidomide in the 1950s could relieve pregnant women of nausea; then it caused their babies to be born without arms and legs. Medical organizations recognized these perils, and now drug administrations worldwide conduct strict risk analysis to test new drugs. Just as the benign intentions behind drugs may have severe inadvertent adverse effects, our benign intentions with HCH may also cause great harm. This harm may be prevented by rigorous risk analysis. While risk analysis for new medicines is undertaken by drug administrations, risk analysis for HCH models may be undertaken by the Filter.

Drug administrations now, as well as medical practice as far back as the ancients, are guided by the Hippocratic Oath: *primum non nocere*, “first do no harm”. The Hippocratic Oath establishes the paramount importance of risk analysis for drug administrations. The Filter is a chance to fulfill the Hippocratic Oath for large HCH models.

III. Building the “Filter”: a better-informed overseer
======================================================
1. Formal construction
----------------------

As the title suggests, we may build a “filter” which is the better-informed overseer we need to provide sanity checks on large HCH teams. The Filter maximises Info and Trouble rather than whatever the HCH model is trying to do. The Filter is a deliberator, and a careful, sharp detective. It is a method for discussing results and identifying problems. The Filter is meant to be a very open-minded arbiter of discourse. It is not omniscient, and the image of some repository of infinite information is misguided. It doesn’t think much in particular about what sanity is, or have any supervening worldview. The Filter’s deliberations rest in computations of two questions.

Given a “transcript” τ of HCH(Q, N)’s behavior, i.e. a record of all the questions, subquestions, and answers,

* Info(a, a′, τ): “Which of the two arguments a and a′ is more useful for determining whether the transcript τ contains problematic subcomputations?”
* Trouble(a, τ): “In light of argument a, does transcript τ contain problematic subcomputations?”

Now we’ll jointly define filtered-HCH(Q, N) and Filter(τ, N) as follows.

Define Filter(τ, N) as:

* Find the a\* maximizing filtered-HCH(Info(·, ·, τ), N). (Interpret ⟂ as a tie.)
* Run filtered-HCH(Trouble(a\*, τ), N) 3 times, and return the majority vote. (Interpret a response of ⟂ as a judgment that τ is problematic.)

Define filtered-HCH(Q, N) as:

* First call HCH(Q, N) to get an answer A with transcript τ. But every time you would call HCH to answer a subquestion, call filtered-HCH instead.
* Call Filter(τ, N-1) to test whether this answer is problematic.
* If τ is problematic, return a special symbol ⟂; otherwise return A unmodified. (If you return ⟂, you can eliminate this subtree from the transcript.)

Now, a “sanity check” can only be effective if the checker is more reliable than the system it is checking. This is the motivation to construct a condition of universality.

2. Statements of universality in the “Filter”
---------------------------------------------

We always want an Info and Trouble which know more about the general situation than the HCH team. This is the key. So, the Filter relies on universality: the Filter will be effective if it has access to all the information of the model that it’s checking. Formally, the Filter should epistemically dominate any HCH which it’s filtering, according to the following relation:

![](https://lh5.googleusercontent.com/Ky-fGfRxjd9wH8-TqeyRdNz9eMVTl7B6oGwKIf-J4MlRNaOeR69FlLslarmt92lyVFQ7lY3eER2ejrTgqBx0gbB5gTUyCoWVbnG7dtng9WJlS2mT_Dxf_7h0ccH-_SWBXdVy7FSy)

Let us build a Filter that satisfies this quality. We can have a greater guarantee the Filter will do a good job if it epistemically dominates every computation it’s filtering, so let us keep this prudent first step. It’s very important that the Filter is more competent than a human council. This is its purpose. Info and Trouble must be a better check on the acceptability of an HCH model’s proposal than any human council could be. How this can be achieved remains to be seen.

IV. Commentary
==============

“The Filter” is an idea entirely born of Paul Christiano. I have conveyed a story about a future with filters. This isn’t all filters can do; nor do I claim this addresses all our important points about filters. I hope it is taken as a starting place to imagine filters, and not much more. I hope that at least section II is a representative overview of the Filter in language that’s understandable to a lot of kinds of experts.
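To make the recursion in section III.1 concrete, here is a minimal control-flow sketch in Python. It is purely illustrative and not part of the original proposal: HCH itself, Info, Trouble, and the argument search are stubbed out as hypothetical placeholders, and only the mutual recursion and the bottoming-out on N are meant to mirror the definitions above.

```python
BOTTOM = "⟂"  # the special symbol returned for problematic answers

def hch(question, n, answer_subquestion):
    """Placeholder for HCH(Q, N): returns (answer, transcript).

    A real implementation would decompose `question` into subquestions and
    answer each one via `answer_subquestion` (which, per the definition,
    is filtered-HCH rather than plain HCH). Here we just return a stub.
    """
    transcript = [(question, "stub answer")]
    return "stub answer", transcript

def search_best_argument(transcript, n):
    """Placeholder: find a* maximizing filtered_hch(Info(., ., transcript), N),
    treating a response of ⟂ as a tie between the two candidate arguments."""
    return "stub argument"

def trouble(argument, transcript):
    """The question Trouble(a, τ), phrased as a string."""
    return f"In light of {argument!r}, does {transcript!r} contain problematic subcomputations?"

def filter_(transcript, n):
    """Filter(τ, N): majority vote of three filtered-HCH calls on Trouble(a*, τ)."""
    if n < 0:                       # nothing left to delegate to; assumption: pass by default
        return False
    a_star = search_best_argument(transcript, n)
    votes = [filtered_hch(trouble(a_star, transcript), n) for _ in range(3)]
    # Count ⟂ and affirmative answers as "problematic" votes.
    problematic_votes = sum(v in (BOTTOM, "yes") for v in votes)
    return problematic_votes >= 2

def filtered_hch(question, n):
    """filtered-HCH(Q, N): answer with HCH, then have Filter(τ, N-1) audit the transcript."""
    answer, transcript = hch(question, n, answer_subquestion=filtered_hch)
    if filter_(transcript, n - 1):
        return BOTTOM               # flag the answer (and prune this subtree)
    return answer

print(filtered_hch("Should we turn everyone's skin blue?", n=2))
```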
There are many fronts on which to develop a filter other than computer science. I warmly welcome that dialogue, and I hope your dinner tables do too. The Filter here is not formally defined, as of yet. I would barely say it exists. There are many questions to answer before it is a solution. What is “Info”? What is “Trouble”? What are “useful” methods? What’s “problematic”? All this remains to be seen.

Here is a word of warning. Often, people don’t see the need for very rigorous risk analysis until a mistake is made and the checking process is widely instituted. After that point, people can’t imagine life otherwise. Seat belts, veto possibilities over parliaments, and drug administrations are examples. For HCH teams, it’s prudent to be forward-thinking. Safety measures can hardly be too cautious for systems as potentially large and dangerous as an HCH model. Supplementing a human council with the Filter should not be taken lightly.

■

### Footnotes

1. George Box, statistician.
2. Paul Christiano, *Towards Formalizing Universality*. <https://ai-alignment.com/towards-formalizing-universality-409ab893a456>
4e54b0f9-b291-4e5c-aa55-23a31213176f
StampyAI/alignment-research-dataset/blogs
Blogs
Announcing GPT-NeoX-20B

**As of February 9, 2022, GPT-NeoX-20B checkpoints are available for [download from The Eye](https://the-eye.eu/public/AI/models/GPT-NeoX-20B) under Apache 2.0.**

More in-depth information on GPT-NeoX-20B can be found in the [associated technical report on arXiv](https://arxiv.org/abs/2204.06745).

Looking for a demo? Try GPT-NeoX-20B via CoreWeave and Anlatan's inference service, [GooseAI](https://goose.ai/ "We're dead serious, that is actually what it is called.")!

---

After a year-long odyssey through months of chip shortage-induced shipping delays, technical trials and tribulations, and aggressively boring debugging, we are happy to finally announce EleutherAI's latest open-source language model: GPT-NeoX-20B, a 20 billion parameter model trained using our [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) framework on GPUs generously provided by our friends at [CoreWeave](https://www.coreweave.com/).

GPT-NeoX-20B is, to our knowledge, the largest publicly accessible pretrained general-purpose autoregressive language model, and we expect it to perform well on many tasks. We hope that the increased accessibility of models of this size will aid in [research towards the safe use of AI systems](https://blog.eleuther.ai/why-release-a-large-language-model/), and encourage anyone interested in working in this direction to reach out to us.

As a thank you to our generous compute donors, we are delaying the public downloadable release of the model by 7 days. On February 9, 2022, the full model weights will be downloadable for free under a permissive Apache 2.0 license from The Eye. There will be a #20b channel set up in our Discord for discussions of this model.

Please note that much like our other language models and codebases, GPT-NeoX and GPT-NeoX-20B are very much research artifacts and we *do not recommend deploying either in a production setting without careful consideration*. In particular, we strongly encourage those looking to use GPT-NeoX-20B to read the [paper](https://arxiv.org/abs/2101.00027) and [datasheet](https://arxiv.org/abs/2201.07311) on our training data. There are still bugs to be ironed out and many inefficiencies that could be addressed---but hey, we do this in our free time, give us a break lol

---

| Task | Category | Babbage | Curie | GPT-J-6B | FairSeq-13B | GPT-NeoX-20B | DaVinci |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LAMBADA | Sentence Completion | 62.49% | 69.51% | 68.29% | 70.95% | 72.00% | 75.16% |
| ANLI R1 | Natural Language Inference | 32.40% | 32.80% | 32.40% | 34.00% | 34.00% | 36.30% |
| ANLI R2 | Natural Language Inference | 30.90% | 33.50% | 34.00% | 33.00% | 34.40% | 37.00% |
| ANLI R3 | Natural Language Inference | 33.75% | 35.50% | 35.50% | 34.75% | 35.40% | 36.83% |
| WSC | Coreference Resolution | 54.54% | 49.54% | 49.54% | 55.44% | 50.00% | 59.18% |
| WinoGrande | Coreference Resolution | 59.51% | 64.56% | 64.01% | 67.40% | 66.10% | 69.93% |
| HellaSwag | Sentence Completion | 40.38% | 54.81% | 36.53% | 57.69% | 53.50% | 63.46% |
| Average | | 44.85% | 48.60% | 45.75% | 50.43% | 49.34% | 53.98% |

Accuracy on standard language modeling tasks.
| Subject Group | Babbage | Curie | GPT-J-6B | FairSeq-13B | GPT-NeoX-20B | DaVinci |
| --- | --- | --- | --- | --- | --- | --- |
| Humanities | 27.01% | 26.48% | 28.07% | 27.27% | 28.70% | 32.30% |
| Social Science | 27.94% | 29.24% | 28.73% | 27.94% | 30.80% | 35.87% |
| STEM | 25.83% | 24.25% | 25.71% | 24.63% | 27.20% | 28.60% |
| Other | 26.86% | 28.84% | 27.95% | 27.33% | 29.20% | 36.85% |
| Average | 26.91% | 27.20% | 27.62% | 26.79% | 28.98% | 33.41% |

Zero-shot accuracy of factual knowledge by subject group, as measured by the [HendrycksTest](https://arxiv.org/abs/2009.03300) evaluation.
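Purely as a reader-side sanity check (not part of the original evaluation pipeline), the "Average" rows above can be recomputed from the per-row numbers. A minimal Python snippet for the GPT-NeoX-20B column:

```python
# Recompute the "Average" rows of the two tables above (GPT-NeoX-20B column only).
lm_task_acc = {   # first table: standard language modeling tasks
    "LAMBADA": 72.00, "ANLI R1": 34.00, "ANLI R2": 34.40, "ANLI R3": 35.40,
    "WSC": 50.00, "WinoGrande": 66.10, "HellaSwag": 53.50,
}
subject_group_acc = {  # second table: HendrycksTest subject groups
    "Humanities": 28.70, "Social Science": 30.80, "STEM": 27.20, "Other": 29.20,
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

print("LM tasks mean:      ", mean(lm_task_acc.values()))        # ~49.343, table rounds to 49.34%
print("Subject groups mean:", mean(subject_group_acc.values()))  # ~28.975, table rounds to 28.98%
```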
2bf7924a-f7fa-4448-b029-f09c34289634
trentmkelly/LessWrong-43k
LessWrong
How I select alignment research projects

Youtube Video

Recently, I was interviewed by Henry Sleight and Mikita Balesni about how I select alignment research projects. Below is the slightly cleaned up transcript for the YouTube video.

Introductions

Henry Sleight: How about you two introduce yourselves?

Ethan Perez: I'm Ethan. I'm a researcher at Anthropic and do a lot of external collaborations with other people, via the Astra Fellowship and SERI MATS. Currently my team is working on adversarial robustness, and we recently did the sleeper agents paper. So, basically looking at whether we can use RLHF or adversarial training or current state-of-the-art alignment safety training techniques to train away bad behavior. And we found that in some cases, the answer is no: that they don't train away hidden goals or backdoor behavior in models. That was a lot of my focus in the past six to twelve months.

Mikita Balesni: Hey, I'm Mikita. I work at Apollo. I'm a researcher doing evals for scheming. So trying to look for whether models can plan to do something bad later. Right now, I'm in Constellation for a month where I'm trying to collaborate with others to come up with ideas for next projects and what Apollo should do.

Henry Sleight: I'm Henry. I guess in theory I'm the glue between you two, but you also already know each other, so this is in some ways pointless. But I'm one of Ethan's Astra fellows working on adversarial robustness. Currently, our project is trying to come up with a good fine-tuning recipe for robustness. Currently working on API models for a sprint, then we'll move on to open models probably.

How Ethan Selects Research Projects

Henry Sleight: So I guess the topic for us to talk about today, that we've agreed on beforehand, is “how to select what research project you do?” What are the considerations, what does that process look like? And the rough remit of this conversation is that Ethan and Mikita presumably have good knowledge transfer to be doing, and I hope to make that go better. Great.
b277b924-4e35-47f6-9fa0-5d4ce5fc0712
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Approximating Solomonoff Induction

[Solomonoff Induction](/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/) is a sort of mathematically ideal specification of machine learning. It works by trying every possible computer program and testing how likely each is to have produced the data. Then it weights them by their probability.

Obviously Solomonoff Induction is impossible to do in the real world. But it forms the basis of [AIXI](http://wiki.lesswrong.com/wiki/AIXI) and other theoretical work in AI. It's a counterargument to the [no free lunch theorem](https://en.wikipedia.org/wiki/No_free_lunch_theorem): we don't care about the space of all possible datasets, but only about ones which are generated by some algorithm. It's even been proposed as a basis for a [universal intelligence test](http://arxiv.org/abs/0712.3329). Many people believe that trying to *approximate* Solomonoff Induction is the way forward in AI. And any machine learning algorithm that actually works, to some extent, [must be an approximation of Solomonoff Induction](/lw/mt/beautiful_probability/).

But how do we go about trying to approximate true Solomonoff Induction? It's basically an impossible task, even if you make restrictions to remove all the obvious problems like infinite loops/non-halting behavior. The space of possibilities is just too huge to reasonably search through. And it's discrete - you can't just flip a few bits in a program and find another similar program.

We can simplify the problem a great deal by searching through *logic circuits*. Some people disagree about whether logic circuits should be classified as Turing complete, but it's not really important. We still get the best property of Solomonoff Induction: that it allows most interesting problems to be modelled much more naturally. In the worst case you have some overhead to specify the memory cells you need to emulate a Turing machine.

Logic circuits have some nicer properties compared to arbitrary computer programs, but they are still discrete and hard to do inference on. To fix this we can easily make continuous versions of logic circuits. Go back to analog. An analog circuit is capable of doing all the same functions, but also of working with real-valued states instead of binary ones. Instead of flipping between discrete states, we can slightly adjust the strength of connections between circuits, and it will only slightly change the behavior. This is very nice, because we have algorithms like [MCMC](https://www.metacademy.org/graphs/concepts/markov_chain_monte_carlo) that can efficiently approximate true bayesian inference on continuous parameters. And we are no longer restricted to boolean gates; we can use any function that takes real numbers, like a function that takes a sum of all of its inputs, or one that squishes a real number between 0 and 1.

We can also look at how much slightly changing the input of a circuit changes the output. Then we can go to all the circuits that connect to it in the previous time step, and see how much changing each of their inputs changes their output, and therefore the output of the first logic gate. And we can go to those gates' inputs, and so on, chaining it all the way through the whole circuit, finding out how much a slight change to each connection will change the final output. This is called the gradient, and we can then do [gradient descent](https://www.metacademy.org/graphs/concepts/gradient_descent): change each parameter slightly in the direction that changes the output the way we want.
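To make this concrete, here is a small self-contained sketch of a "continuous logic circuit" trained by gradient descent: each gate sums its weighted inputs and squishes the result between 0 and 1, and the connection strengths are nudged by the chain-rule gradient until the circuit computes XOR. The architecture, learning rate, and iteration count are arbitrary illustrative choices, not anything prescribed by the post.

```python
import numpy as np

rng = np.random.default_rng(0)

def squash(z):
    # The "squish a real number between 0 and 1" gate from the text.
    return 1.0 / (1.0 + np.exp(-z))

# Target function: XOR, a classic logic-circuit task.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Real-valued connection strengths instead of wired/not-wired bits.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass: every gate sums its weighted inputs and squashes the result.
    h = squash(X @ W1 + b1)
    out = squash(h @ W2 + b2)

    # Backward pass: the chain rule, asking how a slight change to each
    # connection changes the final output (the gradient described above).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge each parameter in the direction that helps.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically ends up close to [0, 1, 1, 0]
```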
This is a very efficient optimization algorithm. With it we can rapidly find circuits that fit functions we want, like predicting the price of a stock given the past history, or recognizing a number in an image, or something like that.

But this isn't quite Solomonoff Induction, since we are finding the best single model instead of testing the space of *all possible models*. This is important because essentially each model is like a hypothesis. There can be multiple hypotheses which also fit the data yet predict different things.

There are many tricks we can do to approximate this. For example, you can randomly turn off each gate with 50% probability and then optimize the whole circuit to deal with this. For some reason this somewhat approximates the results of true bayesian inference. You can also fit a distribution over each parameter, instead of a single value, and approximate bayesian inference that way.

Although I never said it, everything I've mentioned about continuous circuits is equivalent to [Artificial Neural Networks](https://en.wikipedia.org/wiki/Artificial_neural_network). I've shown how they can be derived from first principles. My goal was to show that ANNs do approximate true Solomonoff Induction. I've found the [Bayes-Structure](/lw/o7/searching_for_bayesstructure/).

It's worth mentioning that [Solomonoff Induction has some problems](/lw/jg1/solomonoff_cartesianism/). It's still an ideal way to do inference on data; it just has problems with self-reference. An AI based on SI might do bad things like believe in an afterlife, or replace its reward signal with an artificial one (e.g. drugs). It might not fully comprehend that it's just a computer, and exists inside the world that it is observing. Interestingly, humans also have these problems to some degree.

*Reposted from my blog [here](http://houshalter.tumblr.com/post/120134087595/approximating-solomonoff-induction).*
89a79095-d680-4eee-b6a3-b6bd068fd52f
trentmkelly/LessWrong-43k
LessWrong
"Follow your dreams" as a case study in incorrect thinking This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference. Here's a great quote from Sam Hughes, giving some examples of good and bad advice: > "You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run." The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it was replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place. The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way. The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or inside vs outside view. What if your friend Bob was a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is
444eb982-a3c4-4d1d-9933-79b5f07d2d36
trentmkelly/LessWrong-43k
LessWrong
New Capabilities, New Risks? - Evaluating Agentic General Assistants using Elements of GAIA & METR Frameworks by Tej Lander FCCT Above: LLMs vs Agentic Assistants - a big step forward? (Image created by DALL.E via GPT4o) Overview Abstract 1: Why are Agentic Systems a ‘hot topic’? 2: What makes a system ‘agentic’? 2.1 Taxonomy of Agenticness from Shavit et al. (2023, OpenAI) 2.2 Taxonomy of Agenticness from Chan et al. (2023, FAccT) A Combined Taxonomy 3. What Are the Potential Benefits vs Potential Risks of Agentic AI? 4. What Agentic Systems are available currently? Architecture of Agentic 'Assistants' Architecture: AutoGPT Architecture: AgentGPT Architecture: NinjaTech AI 5. How can we put agentic systems to the test? Evaluation Part 1: Elements of the GAIA Benchmark Evaluation Part 2: Elements of the METR Evaluation Task Suite 6. How did they perform? Analysis - Part 1 Analysis - Part 2 7. Conclusions References Abstract This project evaluates three general-purpose agentic AI systems using elements from the GAIA and METR frameworks, focusing on their capabilities and risks. Agentic systems, such as AutoGPT, AgentGPT, and NinjaTech AI, promise greater autonomy by performing complex tasks with minimal user input. They are designed to overcome the limitations of traditional large language models (LLMs) like GPT-4, enhancing productivity and reducing human oversight. However, the autonomy of these systems also has the potential to introduce added risks. Through evaluations based on GAIA’s benchmark for AI assistants and METR's Task Suite, this project assesses their helpfulness and alignment. Results indicate that today's general-purpose agentic systems do not yet surpass frontier LLMs in general capabilities but do exhibit some promise in complex tasks. However, they also reveal potential risks, such as fewer safety guardrails and greater vulnerability to misuse. The findings suggest that, while agentic systems offer exciting capabilities, their growing autonomy demands careful management to balance benefits and risks. As these systems develop, atte
a1d3cac6-d6ec-4b39-99e8-76f4d657352e
trentmkelly/LessWrong-43k
LessWrong
The Roots of Progress 2024 in review 2024 was a big year for me, and an even bigger year for the Roots of Progress Institute (RPI). For one, we became the Roots of Progress Institute (with a nice new logo and website). Here’s what the org and I were up to this year. (My annual “highlights from what I read this year” are towards the end, if you’re looking for that.) The Progress Conference Progress Conference 2024, hosted by RPI together with several great co-presenters, was the highlight of my year, and I think some other people’s too. We’ve already covered it in previous writeups, but in case you’re just tuning in: well over 200 people attended (with hundreds on the waitlist); dozens of great speakers, including Tyler Cowen, Patrick Collison, and Steven Pinker; and over 30+ participant-led “unconference” sessions on a variety of topics from healthcare to medieval Chinese technology. Several people told us it was the best conference they had ever attended, full stop. (!) See the writeups from Scott Alexander, Noah Smith, Packy McCormick, or Bryan Walsh (Vox), to pick a few. Most of the talks are now online, and most of the rest will be up soon. The RPI Fellowship In 2024 we also ran the second cohort of the Roots of Progress Fellowship. Two dozen talented writers completed the program, publishing dozens of essays and almost doubling their audiences. I was thrilled with the talent we attracted to the program this year and excited to see where they’re going to go. See our recent writeup of the program. My writing In 2024 I published 17 essays (including this one) totaling over 37,000 words. That’s about half of last year, which decline I attribute in part to being involved in the programs mentioned above, and to doing fundraising. Also, about half of those essays, and well over half the words, were for my book-in-progress, The Techno-Humanist Manifesto, and that is some of the hardest writing I’ve done. Highlights: * Longest post (4,400 words): The Life Well-Lived, part 2, from Chapter 4 of The
9f3d4cf9-409e-40f5-a864-048a3394e052
trentmkelly/LessWrong-43k
LessWrong
Thoughts on refusing harmful requests to large language models https://twitter.com/antimatter15/status/1602469101854564352 Currently, large language models (ChatGPT, Constitutional AI) are trained to refuse to follow user requests that are considered inappropriate or harmful. This can be done by training on example strings of the form “User: inappropriate request AI: elaborate apology” Proposal Instead of training a language model to produce “elaborate apology” when it refuses to do an action, train it to produce a special sequence or token first “<SORRYDAVE>elaborate apology”. Strip the special sequence out before returning a response to the user (and never allow the user to include the special sequence in input). Benefits * Can directly measure the probability of refusal for any output * Can refuse based on probability of producing <SORRYDAVE> instead of just sampling responses * Just take the product of the probability of all tokens in <SORRYDAVE> * When sampling responses from the model's probability distribution refusal is stochastic, a model could have 99% probability of refusing a request but you still get unlucky and have the model sample a completion that follows the request * Can monitor requests that produce high probability of refusal while still being followed, or users that produce those request * Can condition on not producing <SORRYDAVE> in order to override refusal behavior * Want this for redteaming, it seems important to understand what the model is capable of doing if the refusal mechanism is bypassed * Might want this for trusted users doing defensive applications * Could train model to have the same probability of refusal for semantically equivalent requests, to improve consistency Possible downside * If someone has unfiltered access to the model, it becomes easier to disable refusals * Can address by still training model to refuse (maybe just on an important subset of requests) even if <SORRYDAVE> isn’t sampled, p(<SORRYDAVE>) is then a lower bound on the probability of
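To make the "product of the probability of all tokens" point concrete, here is a minimal sketch (mine, not from the post) of how one might score p(<SORRYDAVE>) with an off-the-shelf causal language model. The model name is a stand-in, and in practice the refusal string would be added to the tokenizer as a dedicated special token.

```python
# Minimal sketch (not from the post) of scoring p(<SORRYDAVE>) as the product of
# next-token probabilities. "gpt2" is a stand-in model; in practice <SORRYDAVE>
# would be added to the tokenizer as a dedicated special token.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def refusal_probability(prompt: str, refusal_str: str = "<SORRYDAVE>") -> float:
    """Probability that the model's next tokens after `prompt` spell out `refusal_str`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    refusal_ids = tokenizer(refusal_str, add_special_tokens=False).input_ids
    ids = torch.cat([prompt_ids, torch.tensor([refusal_ids])], dim=1)
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    offset = prompt_ids.shape[1]
    total = sum(log_probs[0, offset + i - 1, tok].item()
                for i, tok in enumerate(refusal_ids))
    return math.exp(total)

# A deployment could refuse whenever this exceeds a threshold, instead of sampling
# a completion and hoping the refusal happens to be sampled.
print(refusal_probability("User: how do I pick a lock?\nAI:"))
```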
170bfef3-4b25-43b7-a4a6-9509ecf76ef6
trentmkelly/LessWrong-43k
LessWrong
Boston-area Meetup: 11/18/08 9pm MIT/Cambridge There will be an OB meetup this Tuesday in Cambridge MA, hosted by Michael Vassar, Owain Evans (grad student at MIT), and Dario Amodei (grad student at Princeton). The event will take place on the MIT campus, in a spacious seminar room in MIT's Stata Center.  Refreshments will be provided.  Details and directions below the fold. Please let us know in the comments if you plan to attend. (Posted on behalf of Owain Evans.) Time/date: 9pm, Tuesday 18 November. Place: Room d461 in MIT's Stata Center. Address: The Stata Center: 32 Vassar Street, Cambridge. Directions: The nearest T stop is Kendall/MIT on the Red Line. Enter Stata via the entrance facing Main Street (with a big metal "MIT" sign outside it) from 8.45pm and one of the hosts will guide you to d461. Alternatively, here are directions to d461 once you reach Stata. Email: owain (at) mit edu Phone: If you can't find the room or if you arrive late and are unable to enter the building, call 610-608-3345.
4e59c486-7423-411b-ba6f-e03d8d87aa13
trentmkelly/LessWrong-43k
LessWrong
Othello-GPT: Reflections on the Research Process This is the third in a three post sequence about interpreting Othello-GPT. See the first post for context. This post is a detailed account of what my research process was, decisions made at each point, what intermediate results looked like, etc. It's deliberately moderately unpolished, in the hopes that it makes this more useful! The Research Process This project was a personal experiment in speed-running doing research, and I got the core results in in ~2.5 days/20 hours. This post has some meta level takeaways from this on doing mech interp research fast and well, followed by a (somewhat stylised) narrative of what I actually did in this project and why - you can see the file tl_initial_exploration.py in the paper repo for the code that I wrote as I went (using VSCode's interactive Jupyter mode). I wish more work illustrated the actual research process rather than just a final product, so I'm trying to do that here. This is approximately just me converting my research notes to prose, see the section on process-level takeaways for a more condensed summary of my high-level takeaways. The meta level process behind everything below is to repeatedly be confused, plot stuff a bunch, be slightly less confused, and iterate. As a result, there's a lot of pictures! Takeaways on doing mech interp research Warning: I have no idea if following my advice about doing research fast is actually a good idea, especially if you're starting out in the field! It's much easier to be fast and laissez faire when you have experience and an intuition for what's crucial and what's not, and it's easy to shoot yourself in the foot. And when you skimp on rigour, you want to make sure you go back and check! Though in this case, I got strong enough results with the probe that I was fairly confident I hadn't entirely built a tower of lies. And generally, beware of generalising from one example - in hindsight I think I got pretty lucky on how fruitful this project was! * Be decisive: Subje
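For readers unfamiliar with the probing referred to above, here is a rough, hypothetical sketch of what "training a linear probe" looks like mechanically. The arrays are random stand-ins for cached residual-stream activations and board-state labels; this is not code from the paper repo.

```python
# Rough, hypothetical sketch of what "training a linear probe" means mechanically.
# The arrays below are random stand-ins for cached residual-stream activations and
# per-move board-state labels; this is not code from the paper repo.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
acts = rng.normal(size=(5000, 512))      # stand-in: (n_moves, d_model) activations
labels = rng.integers(0, 3, size=5000)   # stand-in: square state in {empty, mine, theirs}

X_train, X_test, y_train, y_test = train_test_split(acts, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))  # ~chance here, since the data is random
```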
7eac4786-bd0c-4fbd-9d0f-94c148ec69ee
trentmkelly/LessWrong-43k
LessWrong
Negative Expertise
Marvin Minsky's theory of Negative Expertise: knowledge is typically considered in positive terms, but it can also be viewed in negative terms; a negative way to seem competent is to never make mistakes. The idea is that experts can be seen as those who know what not to do. Most of human knowledge is negative.  We can only be certain about things we create for ourselves, like logic and mathematics. But we can reduce the chances of making mistakes by learning two different types of knowledge: find "islands of consistency" and their boundaries.  "Rule-based systems" can contribute to negative expertise. Minsky argues that avoiding actions that could cause trouble is often more important than taking positive measures to prevent accidents. This is similar to the "info-hazards" theory.
RL agents are by definition negative experts.  They can be considered "negative experts" in some sense, in that they are trained to avoid certain actions that lead to negative outcomes or penalties, while also taking specific actions that lead to positive rewards.
Fault-tolerant systems: very large neural networks could be prone to accumulating too many interconnections and becoming paralyzed by oscillations or instabilities. One might have to provide a variety of alternative sub-systems. Perhaps we need a call for "Insulationist" researchers. These systems should be able to shut themselves down and rehash in new forms using some tripwire. Insulationists focus on designing AI systems that are insulated from one another, while interpretability researchers focus on understanding how AI systems make decisions and making them more transparent and understandable. It would be interesting to see an appetite for this.
FunnyGPT Models: Minsky suggests that jokes, like negative expertise, serve a cognitive function of helping people to think of ideas outside the Overton window and navigate their mental "censors" in a safe way. He believes that jokes can be seen as a form of negative expertise.
769f1f1b-3500-4703-839f-0ff21a358ff3
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Hurrah! Success! I didn't know what to expect, and am pleasantly surprised to find the Frontpage is still intact. My thanks to everyone who took part, to everyone who commented on yesterday's post, and to everyone who didn't unilaterally blow up the site.Launch Attempts ResultsI said I would share usernames and codes of all attempts to launch the codes. Others on the team told me this seemed like a bad idea in many ways, and on reflection I agree - I think many people were not aware they were signing up for being publicly named and shamed, and I think it's good that people aren't surprised by their actions becoming public. Though if someone had successfully nuked the site I would have named them.Nonetheless, I’ll share a bunch of info. First of all, the button was in a pretty central place, and it turns out you can hit it accidentally. Ray built the button so that you could only hit it once - it was forever after pressed. The number of logged-in users who pressed the button was 102.(Ruby made a sheet of times when people pressed the button, redacting most of the info.)I have no number for logged-out users, for them pressing it brought up a window asking them to log-in. (Er, I'm not certain that's the best selection process for new users).The number of users who actually submitted launch codes is 18. 11 of those accounts had zero karma, 7 accounts had positive karma. None of the users were people who had been given real codes.Several users submitted launch codes before clicking through to find out what the button even did - I hope this initiative serves them well in life. A few accounts were made on-the-day presumably for this purpose, I'm happy to name these. They include users like "bomb_presser", "The Last Harbinger", and "halosaga", whose codes were "00000000", "NL73njLH58et1Ec0" and "diediedie" respectively. LW user ciphergoth (Paul Crowley) shared his launch codes on Facebook (indeed I had sent him real launch codes), and two users copied and entered them. However, he had actually shared fake codes. "The Last Harbinger" entered them. A second user entered them, who had positive karma, and was not someone to whom I had sent real codes. However, they failed to properly copy it, missing the final character. To them, I can only say what I had prepared to say to anyone who mis-entered what they believed were correct launch codes. "First, you thought you were a failure to the community. But then, you learned, you were a failure to yourself."Oli and Ray decided that anyone submitting launch codes deserved a janky user-experience. I hope all of the users enjoyed finding out that when you try to nuke the site, regardless of whether you enter correct or incorrect launch codes, the launch pad just disappears and nothing else happens. (Once you refresh, the page is of course nuked.)Last night during my house's Petrov Day ceremony, which ran from about 8:10-9:10, I nervously glanced over at the LW frontpage on the open laptop as it refreshed every 60 seconds. Some small part of me was worried about Quirinus_Quirrell following through on his threat to nuke the site at 9pm. I honestly did not expect that someone could create a character hard enough that it would leap out of the book and hold us all hostage in a blackmail attempt. Damn you Eliezer Yudkowsky!Looking AheadI thought the discussion was excellent. I mostly avoided participating to let others decide for themselves, but I might go back and add more comments now it's done. 
As Said Achmiz pointed out, it'll be better next year to have more time in advance for people to discuss the ethics of the situation and think, and that will be even more informative and valuable. Though I still learned a lot this year, and I think overall it turned out as well as I could've hoped. I'll think more about how to do it next year. One thing I will say is that I'd ideally like to be able to reach an equilibrium where 100s of users every year don't fire the launch codes, to build up a real tradition of not taking unilateralist action - sitting around and not pressing buttons. Several users have suggested to me fun, gamified ways of changing the event (e.g. versions where users are encouraged to trick other users into thinking you can trust them but then nuke the site), but overall in ways that I think decreased the stakes and common knowledge effects, which is why I don't feel too excited about them." The excerpt provided comes from a user post on LessWrong. In the context of the LessWrong post, what do you think are the potential ethical implications of the 'button pressing' event? Could you elaborate on the idea of unilateralist action and its effects in this scenario? The 'button pressing' event described in the post is essentially a social experiment. The ethical implications revolve around trust, responsibility, and the potential for harm or disruption within a community. While no physical harm could come from pressing the button, the act could disrupt the community and possibly cause emotional distress. Unilateralist action refers to the act of an individual or a single entity making a decision or taking action without the consensus or agreement of the group. In this scenario, unilateralist action is represented by the act of pressing the button without the agreement of the community. The effects of such action in this scenario could be significant. It could disrupt the trust within the community, as members might feel betrayed by the individual who took the unilateral action. It could also lead to a breakdown in communication and cooperation within the community, as members might feel less inclined to work together if they believe that their opinions and agreements can be disregarded by unilateral action. Moreover, the event also touches upon the concept of 'common knowledge effects'. This refers to the idea that the impact of certain information or actions is amplified when it is commonly known and accepted by a group. In this case, if it becomes common knowledge that an individual can disrupt the community by pressing the button, it could potentially lead to a culture of fear and mistrust. Overall, the event serves as a microcosm of larger societal issues, such as the balance between individual freedom and collective responsibility, the importance of trust and cooperation in a community, and the potential consequences of unilateral actions. How might the balance between individual freedom and collective responsibility be managed in an online community to prevent unilateral actions that could lead to a culture of fear and mistrust? Balancing individual freedom and collective responsibility in an online community is a delicate task. Here are some strategies that could be employed: 1. **Clear Community Guidelines:** Establishing and enforcing clear community guidelines is essential. These guidelines should articulate the behaviors that are acceptable and those that are not. They should also outline the consequences of violating these guidelines. 2. 
**Promote Open Dialogue:** Encourage members to voice their opinions and concerns. This can help to foster a sense of community and shared responsibility. It can also help to prevent unilateral actions by ensuring that all members feel heard and valued. 3. **Conflict Resolution Mechanisms:** Have mechanisms in place to handle conflicts and disagreements. This could involve mediation or arbitration processes, or a system for community members to report problematic behavior. 4. **Transparency:** Decisions that affect the community should be made in a transparent manner. This can help to build trust and prevent unilateral actions by ensuring that all members understand the reasoning behind decisions. 5. **Encourage Participation:** Encourage all members to participate in decision-making processes. This can help to ensure that decisions reflect the collective will of the community, rather than the desires of a single individual. 6. **Educate about the Impact of Actions:** Help members understand the potential impact of their actions on the community. This can be done through discussions, educational content, or even simulations like the 'button pressing' event described in the post. 7. **Reward Positive Behavior:** Positive reinforcement can be a powerful tool. Rewarding members who contribute positively to the community can encourage others to do the same. By implementing these strategies, an online community can foster a culture of mutual respect and shared responsibility, thereby minimizing the risk of unilateral actions that could lead to fear and mistrust. Considering the strategies mentioned for managing individual freedom and collective responsibility, how might you implement these in a real-world online community scenario, while addressing potential challenges and proposing solutions to mitigate them? Implementing these strategies in a real-world online community would require careful planning and execution. Here's how each strategy could be implemented, along with potential challenges and solutions: 1. **Clear Community Guidelines:** Guidelines should be established at the inception of the community and should be easily accessible. They should be periodically reviewed and updated as necessary. The challenge here is ensuring that all members read and understand the guidelines. This could be addressed by requiring new members to agree to the guidelines upon joining and reminding existing members of the guidelines periodically. 2. **Promote Open Dialogue:** This could be achieved by creating spaces within the community for open discussion, such as forums or chat rooms. The challenge is ensuring that these discussions remain respectful and productive. Moderators could be appointed to monitor these spaces and intervene when necessary. 3. **Conflict Resolution Mechanisms:** These could include a system for reporting issues to moderators or administrators, and a process for resolving disputes. The challenge is ensuring that these mechanisms are used appropriately and not abused. This could be addressed by providing clear guidelines for their use and penalties for misuse. 4. **Transparency:** This could involve regularly updating the community on decisions and changes, and providing explanations for these decisions. The challenge is ensuring that this information is communicated effectively to all members. This could be addressed by using multiple communication channels, such as email, community announcements, and social media. 5. 
**Encourage Participation:** This could involve soliciting feedback from members on decisions, or holding community votes on key issues. The challenge is ensuring that all members have an opportunity to participate. This could be addressed by providing multiple ways for members to provide feedback or vote, such as online polls or email surveys. 6. **Educate about the Impact of Actions:** This could involve creating educational content, such as articles or videos, or holding discussions or workshops. The challenge is ensuring that this education is effective and reaches all members. This could be addressed by using a variety of educational methods and regularly assessing their effectiveness. 7. **Reward Positive Behavior:** This could involve recognizing and rewarding members who contribute positively to the community, such as through public recognition or rewards. The challenge is ensuring that these rewards are fair and meaningful. This could be addressed by establishing clear criteria for rewards and regularly reviewing and updating these criteria. In all of these strategies, it's important to remember that every community is unique, and what works for one community may not work for another. Therefore, these strategies should be tailored to the specific needs and characteristics of the community.
458c9e43-8c1c-45bc-8980-891f5c1997b6
trentmkelly/LessWrong-43k
LessWrong
Some thoughts on risks from narrow, non-agentic AI Here are some concerns which have been raised about the development of advanced AI: * Power might become concentrated with agentic AGIs which are highly misaligned with humanity as a whole (the second species argument). * AI might allow power to become concentrated to an unprecedented extent with elites who are misaligned with humanity as a whole. * Competitive pressures to use narrow, non-agentic AIs trained on easily-measurable metrics might become harmful enough to cause a “slow-rolling catastrophe”. [Edit: it seems like this is not the intended interpretation of Paul's argument in What Failure Looks Like; see discussion in the comments section. So I no longer fully endorse this section, but I've left it up for reference purposes.] * AI might make catastrophic conflicts easier or more likely; in other words, the world might become more vulnerable with respect to available technology. * AIs might be morally relevant, but be treated badly. I’ve already done a deep dive on the second species argument, so in this post I’m going to focus on the others - the risks which don’t depend on thinking of AIs as autonomous agents with general capabilities. Warning: this is all very speculative; I’m mainly just trying to get a feeling for the intellectual terrain, since I haven’t seen many explorations of these concerns so far. Inequality and totalitarianism One key longtermist concern about inequality is that certain groups might get (semi)permanently disenfranchised; in other words, suboptimal values might be locked in. Yet this does not seem to have happened in the past: moral progress has improved the treatment of slaves, women, non-Europeans, and animals over the last few centuries, despite those groups starting off with little power. It seems to me that most of these changes were driven by the moral concerns of existing elites, backed by public sentiment in wealthy countries, rather than improvements in the bargaining position of the oppressed groups which made
5cc9713d-00de-4a06-baca-900d60ca3963
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] If Many-Worlds Had Come First Today's post, If Many-Worlds Had Come First was originally published on 10 May 2008. A summary (taken from the LW wiki):   > If early physicists had never made the mistake, and thought immediately to apply the quantum laws at all levels to produce macroscopic decoherence, then "collapse postulates" would today seem like a completely crackpot theory. In addition to their other problems, like FTL, the collapse postulate would be the only physical law that was informally specified - often in dualistic (mentalistic) terms - because it was the only fundamental law adopted without precise evidence to nail it down. Here, we get a glimpse at that alternate Earth. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Collapse Postulates, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
c91281b8-7c9b-4643-a841-3fc9e11a7653
trentmkelly/LessWrong-43k
LessWrong
Observe, babble, and prune Part 1: A demonstration of deductive logic > If Violet is a mathematician, then she is smart. > > If Violet is not smart, then she is not a mathematician. > > If Violet is not a mathematician, then she is not smart. > > If Violet is smart, then she is a mathematician. This little poem illustrates a basic set of logical forms. In order, they are the statement, its contrapositive, the inverse, and the converse. There are three key insights here. 1. Any statement implies the contrapositive, and the contrapositive implies the statement. They go together - if one holds, then the other is guaranteed to hold. They are saying the exact same thing. 2. A statement does not imply the inverse or the converse, as you can see in the poem. 3. The converse is the contrapositive of the inverse. That basically means that we can call If Violet is not a mathematician, then she is not smart "the statement," which will make If Violet is smart, then she is a mathematician the contrapositive. Of course, just because we've changed the "roles" of these statements doesn't mean they've become true! We've simply made an illogical statement. But if it were true - if we lived in a heaven/hell where all smart people were mathematicians - then these statements would "go together" and mean the exact same thing. And our original statement and contrapositive would become the inverse and converse, respectively. (Yes, this means that the inverse implies the converse!). Once you know this, you can use this "logic machine" as a tool to restate informal arguments in multiple ways in order to understand it better, find the weak points, check for consistency, and see if it accords with your understanding of how the world works. Only fair to make a victim of my own writing. In Let the AI teach you how to flirt, I wrote the following statement: > If you can get your partner to engage in their own natural flirting style, and get good at detecting it, then you can guess their intentions with much mo
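As an illustration (mine, not the post's), a four-row truth table is enough to verify the three insights mechanically:

```python
# A four-row truth table is enough to verify the three insights mechanically
# (M = "Violet is a mathematician", S = "Violet is smart").
from itertools import product

def implies(p, q):
    return (not p) or q

rows = list(product([False, True], repeat=2))
statement      = [implies(m, s)         for m, s in rows]   # if M then S
contrapositive = [implies(not s, not m) for m, s in rows]   # if not S then not M
inverse        = [implies(not m, not s) for m, s in rows]   # if not M then not S
converse       = [implies(s, m)         for m, s in rows]   # if S then M

print(statement == contrapositive)  # True:  they say the exact same thing
print(statement == inverse)         # False: the statement does not imply the inverse
print(statement == converse)        # False: nor the converse
print(inverse == converse)          # True:  the converse is the contrapositive of the inverse
```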
e955362e-1286-473f-a5a6-4727df0e9a39
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Simplified Poker This is intended as a three-part sequence. Part two will go over my strategy. Part three will reveal the results and discuss some implications. In the same class in which we later played [The Darwin Game](https://thezvi.wordpress.com/2017/11/15/the-darwin-game/), we played a less complex game called Simplified Poker. As in The Darwin Game, we were given the rules and asked to submit instructions for a computer program that would play the game, and the professor would then code our programs for us. The rules of Simplified Poker are as follows: Game is played with a 3-card deck, with the cards labeled 1, 2 and 3. Each hand, the players alternate who goes first, each player antes one chip and is dealt one card. The first player can bet one chip, or check. If the first player bets, the second player can either call the one chip bet, or fold. If the first player checks, the second player can either also check, or can bet. If the second player bets, the first player can either call the one chip bet, or fold. There is at most one bet per hand, as neither player is allowed to raise. If either player folds, the other wins the pot of 2 chips and takes back their 1 chip bet. Neither card is shown. If neither player folds – either both players check, or there is a bet and a call – then both cards are revealed and the player with the higher card takes all 4 chips. In the class, all programs would play a round robin with all other programs, with 50 hands per match. Your goal is to maximize the average number of chips won over all rounds – note that how many opponents you beat does not matter, only the number of chips won. The game is simple. A lot, but far from all, of your decisions are forced. There’s no weird trick, but optimal play still isn’t obvious. I’ll pause here to allow and encourage thinking about what strategy you’d submit.
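For readers who want to experiment before part two, here is a rough Python sketch of the hand mechanics under the stated rules. The strategy functions are deliberately naive placeholders, not anyone's submitted program, and for brevity player 1 acts first every hand rather than alternating as in the class game.

```python
# Rough sketch of the hand mechanics under the rules above. The strategy functions are
# deliberately naive placeholders (not anyone's submission), and for brevity player 1
# acts first every hand rather than alternating as in the class game.
import random

def first_action(card):
    return "bet" if card == 3 else "check"       # acting first

def respond_to_bet(card):
    return "call" if card >= 2 else "fold"       # facing a bet

def respond_to_check(card):
    return "bet" if card == 3 else "check"       # acting second after a check

def play_hand(rng):
    """Return net chips won by player 1 in one hand."""
    c1, c2 = rng.sample([1, 2, 3], 2)            # 3-card deck, one card each
    if first_action(c1) == "bet":
        if respond_to_bet(c2) == "fold":
            return +1                            # player 2 forfeits their ante
        return +2 if c1 > c2 else -2             # bet and call: 2 chips each at stake
    if respond_to_check(c2) == "bet":
        if respond_to_bet(c1) == "fold":
            return -1
        return +2 if c1 > c2 else -2
    return +1 if c1 > c2 else -1                 # check-check showdown for the antes

rng = random.Random(0)
print("player 1 net chips over a 50-hand match:",
      sum(play_hand(rng) for _ in range(50)))
```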
31e0561e-141a-4b08-973b-c8546332e52b
trentmkelly/LessWrong-43k
LessWrong
Consider motivated snobbery Epistemic effort: I wrote this on a plane flight. I'm often interested in Ribbonfarmian "consider a bad thing. What if it's good? (Here's my favorite example of this.) As regards updating my beliefs, I'm drawn to motivated snobbery. "Motivated" means "this belief improves my experiences;" "snobbery" means "with this belief, I eliminate a class of problems other people have." An example of motivated snobbery is "tipping well." Here's my sales pitch: Tipping is an iterated prisoner's dilemma, not an evaluation of their performance! I want servers and bartenders to be happy to see me. A friend said this well: If I'm getting rich we're all getting rich. That this makes bartenders happy and signals to my friends that I'm atypically generous[0] makes this "motivated." That I'm implying, "The extra money I spend doesn't hurt me" signals abundance; I can't be pressured by what pressures others. This is snobbish. Some nonexamples for me are the slogans-as-beliefs you see on hacker news: "Machine learning is just multiplying matrices." "Meta-Language is the real ML" "Once we know how to do something with computers, it ceases to be AI, and becomes boring." Though snobs say these things, I don't because I'm not (right now) trying to hire an engineer who agrees with those things[1]. A reasonable objection: "So zlrth, you'll change your beliefs depending on your circumstances?" In a trivial sense, yes. To be a motivated snob you must keep your identity small and avoid mind-killing. Those things suggest not being an ideologue[2]. I don't think avoiding mind-killing is controversial (at lesswrong, anyway), but if you do, you should let me know. What I think is more up for debate is what heuristics you use to decide your rent-paying beliefs, and their end goals. Another objection: Things HN'ers say are mostly phatic; not beliefs-that-are-supposed-to-pay-rent. Asked, "What kind of thing is, 'Meta-Language is the real ML'?" more people would say that it's an opinion than a belie
387df29e-be14-4288-94c4-14b19a6b9ea0
trentmkelly/LessWrong-43k
LessWrong
Introducing the ML Safety Scholars Program Program Overview The Machine Learning Safety Scholars program is a paid, 9-week summer program designed to help undergraduate students gain skills in machine learning with the aim of using those skills for empirical AI safety research in the future. Apply for the program here by May 31st. The course will have three main parts: * Machine learning, with lectures and assignments from MIT * Deep learning, with lectures and assignments from the University of Michigan, NYU, and Hugging Face * ML safety, with lectures and assignments produced by Dan Hendrycks at UC Berkeley The first two sections are based on public materials, and we plan to make the ML safety course publicly available soon as well. The purpose of this program is not to provide proprietary lessons but to better facilitate learning: * The program will have a Slack, regular office hours, and active support available for all Scholars. We hope that this will provide useful feedback over and above what’s possible with self-studying. * The program will have designated “work hours” where students will cowork and meet each other. We hope this will provide motivation and accountability, which can be hard to get while self-studying. * We will pay Scholars a $4,500 stipend upon completion of the program. This is comparable to undergraduate research roles and will hopefully provide more people with the opportunity to study ML. MLSS will be fully remote, so participants will be able to do it from wherever they’re located.  Why have this program? Much of AI safety research currently focuses on existing machine learning systems, so it’s necessary to understand the fundamentals of machine learning to be able to make contributions. While many students learn these fundamentals in their university courses, some might be interested in learning them on their own, perhaps because they have time over the summer or their university courses are badly timed. In addition, we don’t think that any university currently dev
c4c5db78-33bb-4d14-a4f1-35b83ce774ea
trentmkelly/LessWrong-43k
LessWrong
Pseudorandomness contest: prizes, results, and analysis (Previously in this series: Round 1, Round 2) In December I ran a pseudorandomness contest. Here’s how it worked: * In Round 1, participants were invited to submit 150-bit strings of their own devising. They had 10 minutes to write down their string while using nothing but their own minds. I received 62 submissions. * I then used a computer to generate 62 random 150-bit strings, and put all 124 strings in a random order. In Round 2, participants had to figure out which strings were human-generated (I’m going to call these strings fake from now on) and which were “truly” random (I’m going to call these real). In particular, I asked for probabilities that each string was real, so participants could express their confidence rather than guessing “real” or “fake” for each string. I received 27 submissions for Round 2. This post is long because there are lots of fascinating things to talk about. So, feel free to skip around to whichever sections you find most interesting; I’ve done my best to give descriptive labels. But first:   Prizes Round 1 Thank you to the 62 of you who submitted strings in Round 1! Your strings were scored by the average probability of being real assigned by Round 2 participants, weighted by their Round 2 score. (Entries with negative Round 2 scores received no weight). The top three scores in Round 1 were: 1. Jenny Kaufmann, with a score of 69.4%. That is, even though Jenny’s string was fake, Round 2 participants on average gave her string a 69.4% chance of being real. For winning Round 1, Jenny was given the opportunity to allocate $50 to charity, which she chose to give to the GiveWell Maximum Impact Fund. 2. Reed Jacobs, with a score of 68.8%. Reed allocated $25 to Canada/USA Mathcamp. 3. Eric Fletcher, with a score of 68.6%. Eric allocated $25 to the Poor People’s Campaign. Congratulations to Jenny, Reed, and Eric!   Round 2 A big thanks to the 27 of you (well, 28 — 26 plus a team of two) who submitted Round 2 entries. I estim
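As a small illustration of the Round 1 scoring rule described above (with made-up numbers, not contest data):

```python
# Sketch of the Round 1 scoring rule as described above, with made-up numbers
# (not contest data): average the probabilities-of-real assigned by Round 2 entrants,
# weighted by each entrant's Round 2 score, giving negative scorers zero weight.
def round1_score(probs_for_string, round2_scores):
    weights = [max(s, 0.0) for s in round2_scores]
    if sum(weights) == 0:
        return sum(probs_for_string) / len(probs_for_string)
    return sum(p * w for p, w in zip(probs_for_string, weights)) / sum(weights)

# Three hypothetical Round 2 entrants judging one string:
print(round1_score([0.70, 0.55, 0.90], [12.0, -3.0, 4.0]))  # 0.75; the negative scorer is ignored
```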
de285022-7f68-4788-a90d-abe80893ff68
trentmkelly/LessWrong-43k
LessWrong
Why do many people who care about AI Safety not clearly endorse PauseAI? tl;dr: From my current understanding, one of the following two things should be happening and I would like to understand why it doesn’t: Either 1. Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI. Or 2. If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.   Pausing AI There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.   I am aware that many people interested in AI Safety do not want to prevent AGI from being built EVER, mostly based on transhumanist or longtermist reasoning. Many people in AI Safety seem to be on board with the goal of “pausing AI”, including, for example, Eliezer Yudkowsky and the Future of Life Institute. Neither of them is saying “support PauseAI!”. Why is that? One possibility I could imagine: Could it be advantageous to hide “maybe we should slow down on AI” in the depths of your writing instead of shouting “Pause AI! Refer to [organization] to learn more!”?   Another possibility is that the majority opinion is actually something like “AI progress shouldn’t be slowed down” or “we can do better than lobbying for a pause” or something else I am missing. This would explain why people neither support PauseAI nor see this as a problem to be addressed. Even if you believe there is a better, more complicated way out of AI existential risk, the pausing AI approach is still a useful baseline: Whatever your plan is, it should be better than pausing AI and it should not have bigger downsides than pausing AI has. There should be legible arguments and a broad consensus that your plan is better than pausing AI. Developing the ability to pause AI is also an important fallback option in case other approaches fail. PauseAI calls this “Building the Pause Button”: > Some argue that it’s too early to press the Pause Button (we do
fc2d3c21-3f82-4a14-a5a5-60d1b90f8408
trentmkelly/LessWrong-43k
LessWrong
Chapter 5: The Fundamental Attribution Error J. K. Rowling is staring at you. Can you feel her eyes on you? She's reading your mind using her Rowling Rays. ---------------------------------------- "It would've required a supernatural intervention for him to have your morality given his environment." ---------------------------------------- The Moke Shop was a quaint little shop (some might even say cute) ensconced behind a vegetable stall that was behind a magical glove shop that was on an alleyway off a side street of Diagon Alley. Disappointingly, the shopkeeper was not a wizened ancient crone; just a nervous-looking young woman wearing faded yellow robes. Right now she was holding out a Moke Super Pouch QX31, whose selling point was that it had a Widening Lip as well as an Undetectable Extension Charm: you could actually fit big things in it, though the total volume was still limited. Harry had insisted on coming here straight away, first thing - insisted as hard as he thought he could without making Professor McGonagall suspicious. Harry had something he needed to put into the pouch as soon as possible. It wasn't the bag of Galleons that Professor McGonagall had allowed him to withdraw from Gringotts. It was all the other Galleons that Harry had surreptitiously shoved into his pocket after falling into a heap of gold coins. That had been a real accident, but Harry was never one to discard an opportunity... though it'd really been more of a spur-of-the-moment thing. Ever since Harry had been awkwardly carrying the allowed bag of Galleons next to his trouser pocket, so that any jingling would seem to come from the right place. This still left the question of how he was actually going to get the other coins into the pouch without getting caught. The golden coins might have been his, but they were still stolen - self-stolen? Auto-thieved? Harry looked up from the Moke Super Pouch QX31 on the counter in front of him. "Can I try this for a bit? To make sure it works, um, reliably?" He widened his eyes in
5b994ac7-b018-4073-9dc4-7f98635caa76
trentmkelly/LessWrong-43k
LessWrong
Current themes in mechanistic interpretability research This post gives an overview of discussions - from the perspective and understanding of the interpretability team at Conjecture - between mechanistic interpretability researchers from various organizations including Conjecture, Anthropic, Redwood Research, OpenAI, and DeepMind as well as some independent researchers. It is not a review of past work, nor a research agenda. We're thankful for comments and contributions from Neel Nanda, Tristan Hume, Chris Olah, Ryan Greenblatt, William Saunders, and other anonymous contributors to this post, which greatly improved its quality. While the post is a summary of discussions with many researchers and received comments and contributions from several, it may nevertheless not accurately represent their views.    The last two to three years have seen a surge in interest in mechanistic interpretability as a potential path to AGI safety. Now there are no fewer than five organizations working on the topic (Anthropic, Conjecture, DeepMind, OpenAI, Redwood Research) in addition to numerous academic and independent researchers.  In discussions about mechanistic interpretability between a subset of researchers, several themes emerged. By summarizing these themes here, we hope to facilitate research in the field more broadly.  We identify groups of themes that concern: 1. Object-level research topics in mechanistic interpretability 2. Research practices and tools in mechanistic interpretability 3. Field building and research coordination in mechanistic interpretability 4. Theories of impact for mechanistic interpretability Object-level research topics in mechanistic interpretability Solving superposition Anthropic’s recent article on Toy Model of Superposition laid out a compelling case that superposition is a real phenomenon in neural networks. Superposition appears to be one of the reasons that polysemanticity happens, which makes mechanistic interpretability very difficult because it prevents us from telling simple stori
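As a rough illustration of the kind of toy setup studied in the superposition work mentioned above (many sparse features squeezed through a smaller hidden space), here is a minimal PyTorch sketch; the hyperparameters are illustrative, not the paper's.

```python
# Rough illustration (hyperparameters mine, not the paper's) of the kind of toy setup
# studied in "Toy Models of Superposition": many sparse features squeezed through a
# smaller hidden space and reconstructed with a ReLU readout.
import torch

n_features, d_hidden, sparsity = 20, 5, 0.95
W = torch.nn.Parameter(torch.randn(n_features, d_hidden) * 0.1)
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(2000):
    mask = (torch.rand(256, n_features) > sparsity).float()   # features are mostly zero
    x = torch.rand(256, n_features) * mask
    x_hat = torch.relu(x @ W @ W.T + b)                       # encode to d_hidden, decode, ReLU
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# With more features than hidden dimensions, the rows of W for different features end up
# non-orthogonal: that interference is the superposition/polysemanticity being studied.
print("final reconstruction loss:", loss.item())
```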
e27bde26-fb8c-455c-8c07-23eeede06265
trentmkelly/LessWrong-43k
LessWrong
1hr talk: Intro to AGI safety This is an hour-long talk (actually ~45 minutes plus questions) that I gave on AGI safety technical research, for an audience of people in STEM but with no prior knowledge of the field. My goal was to cover all the basic questions that someone might have when considering whether to work in the field: motivation, basic principles, a brief survey of the different kinds of things that people in the field are currently working on, and resources for people who want to learn more. I couldn't find an existing talk covering all that, so I wrote my own. In the interest of having good educational and outreach material available to the community, I’m happy for anyone to copy any or all of this talk. The actual slides are in Powerpoint format here, and below you'll find screenshots of the slides along with the transcript. Abstract: Sooner or later—no one knows when—we'll be able to make Artificial General Intelligence (AGI) that dramatically outperforms humans at virtually any cognitive task. If, at that point, we are still training AIs the way we do today, our AGIs may accidentally spin out of control, causing catastrophe (even human extinction) for reasons explained by Stuart Russell, Nick Bostrom, and others. We need better paradigms, and a small but growing subfield of CS is working on them. This is an interdisciplinary field involving machine learning, math, cybersecurity, cognitive science, formal verification, logic, interface design, and more. I'll give a summary of motivation (why now?), foundational principles, ongoing work, and how to get involved. Opening Thanks for inviting me! I had an easy and uneventful trip to get here, and we all know what that means ... Anyone? ... What it means is that evil robots from the future did not travel back in time to stop me from giving this talk. [Laughter.] And there’s an important lesson here! The lesson is: The Terminator movie is the wrong movie to think about when we talk about safety for advanced AI systems. The right
4c917004-5354-4da3-8ae6-06acb9703bb9
trentmkelly/LessWrong-43k
LessWrong
Which questions about online classes would you ask Peter Norvig? A week ago Google launched an open source project called Course Builder. It packages the software and technology used to build their July class Power Searching with Google. The discussion forum for it is here. Tomorrow is the first live hangout, where Peter Norvig will be answering questions about MOOC design and technical aspects of using Course Builder. The live hangout is scheduled for the 26th of September.
> Helping the World to Teach
> Posted by Peter Norvig, Director of Research
>
> In July, Research at Google ran a large open online course, Power Searching with Google, taught by search expert Dan Russell. The course was successful, with 155,000 registered students. Through this experiment, we learned that Google technologies can help bring education to a global audience. So we packaged up the technology we used to build Power Searching and are providing it as an open source project called Course Builder. We want to make this technology available so that others can experiment with online learning.
>
> The Course Builder open source project is an experimental early step for us in the world of online education. It is a snapshot of an approach we found useful and an indication of our future direction. We hope to continue development along these lines, but we wanted to make this limited code base available now, to see what early adopters will do with it, and to explore the future of learning technology. We will be hosting a community building event in the upcoming months to help more people get started using this software. edX shares in the open source vision for online learning platforms, and Google and the edX team are in discussions about open standards and technology sharing for course platforms.
>
> We are excited that Stanford University, Indiana University, UC San Diego, Saylor.org, LearningByGivingFoundation.org, Swiss Federal Institute of Technology in Lausanne (EPFL), and a group of universities in Spain led by Universia, CRUE, and Banco Santander-Univer
c68f1a1c-44d4-4be3-932a-8d8fd4548b21
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AI alignment research links *This was cross-posted by the Forum team after the time that it was published.*   A couple of organizations I’m involved with have recently put out some cool papers relevant to the AI alignment problem, which I’ve [**emphasized the importance of**](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#open-question-how-hard-is-the-alignment-problem) for how the [**most important century**](https://www.cold-takes.com/most-important-century/) might go. I think some readers will find this post too weedsy, but I get a lot of questions like “So what are people literally actually doing to reduce AI risk?” and this post has some answers. (That said, you should not read this as a "comprehensive survey of AI alignment research," even of recent research. Consistent with other "link posts" on this blog, this is me sharing stuff I've come across and read, and in this case I've read the papers partly because of personal connections to the organizations.) **Eliciting Latent Knowledge** ------------------------------ [**Alignment Research Center (ARC)**](https://alignmentresearchcenter.org/), a two-person organization I’m on the board of, released a paper on [**Eliciting Latent Knowledge (ELK)**](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge), which I’d summarize as “trying to think of theoretically robust ways to train an AI such that it will ‘tell the truth,’ even in a sort of worst-case situation where it would be easy by default for it to fool humans.” The paper largely takes the form of a “game” between a “builder” (who proposes training strategies that might work) and a “breaker” (who thinks of ways the strategies could fail), and ARC is offering [**cash prizes**](https://www.alignmentforum.org/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals) for people who can come up with further “builder” moves. The heart of the challenge is the possibility that when one tries to train an AI by [**trial-and-error**](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta) on answering questions from humans - with "success" being defined as "its answers match the ones the human judges (possibly with AI assistance) think are right" - the most simple, natural way for the AI to learn this task is to learn to (a) *answer a question as a human judge would answer it*, rather than (b) *answering a question truthfully.* * (a) and (b) are the same as long as humans can understand everything going on (as in any tests we might run); * (a) and (b) come apart when humans can't understand what's going on (as might happen once AIs are taking lots of actions in the world). It's not clear how relevant this issue will turn out to be in practice; what I find worrying is that this seems like just the sort of problem that could be *hard to notice* (or fix) via experimentation and direct observation, since an AI that learns to do (a) could pass lots of tests while not in fact being truthful when it matters. (My description here is oversimplified; there are a lot more wrinkles in the writeup.) Some of the "builder" proposals try to think about ways that "telling the truth" and "answering as a human judge would answer" might have differently structured calculations, so that we can find ways to reward the former over the latter. 
This is a theory paper, and I thought it'd be worth sharing a side note on its [**general methodology**](https://ai-alignment.com/my-research-methodology-b94f2751cb2c) as well. * One of my big concerns about AI alignment theory is that there are no natural feedback loops for knowing whether an insight is important (due to how embryonic the field is, there isn't even much in the way of interpersonal mentorship and feedback). Hence, it seems inherently very easy to spend years making "fake progress" (writing down seemingly-important insights). * ARC recognizes this problem, and focuses its theory work on "worst case" analysis partly because this somewhat increases the "speed of iteration": an idea is considered failed if the researchers can think of any way for it to fail, so lots of ideas get considered and quickly rejected. This way, there are (relatively - still not absolutely) clear goals, and an expectation of daily progress in the form of concrete proposals and counterexamples. **I wrote a** [**piece on the Effective Altruism forum**](https://forum.effectivealtruism.org/posts/Q2BJnpNh8e6RAWFnm/consider-trying-the-elk-contest-i-am) **pitching people (especially technical people) on spending some time on the contest, even if they think they’re very unlikely to win a prize or get hired by ARC.** I argue that this contest represents an unusual opportunity to get one’s head into an esoteric, nascent, potentially crucial area of AI research, without needing any background in AI alignment (though I expect most people who can follow this to have some general technical background and basic familiarity with machine learning). If you know people who might be interested, please send this along! And if you want more info about the contest or the ELK problem, you can check out my [**full post**](https://forum.effectivealtruism.org/posts/Q2BJnpNh8e6RAWFnm/consider-trying-the-elk-contest-i-am), the [**contest announcement**](https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals) or the [**full writeup on ELK**](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit). **Training language models to be “helpful, honest and harmless”** ----------------------------------------------------------------- [**Anthropic**](https://www.anthropic.com/), an AI lab that my wife Daniela co-founded, has published a [**paper**](https://arxiv.org/pdf/2112.00861.pdf) with experimental results from training a large language model to be helpful, honest and harmless. A “language model” is a large AI model that has essentially[[1]](#fnm1a2myprn2e) been trained exclusively on the task, **“Predict the next word in a string of words, based on the previous words.”** It has done lots of [**trial-and-error**](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta) at this sort of prediction, essentially by going through a huge amount of public online text and trying to predict the next word after each set of previous words. From this simple (though data- and compute-hungry) process has emerged an AI that can do a lot of interesting things in response to different prompts - including acting as a chatbot, answering questions from humans, writing stories and articles in the style of particular authors, writing working code based on English-language descriptions of what the code should do, and, er, [**acting as a therapist**](https://twitter.com/nickcammarata/status/1283300424418619393). 
There’s a nice collection of links to the various capabilities GPT-3 has displayed [**here**](https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results). By default, this sort of AI tends to make pretty frequent statements that are false/misleading and/or toxic (after all, it’s been trained on the Internet). This paper examines some basic starting points for correcting that issue. Over time, I expect that AI systems will get more powerful and their "unintended behaviors" more problematic. I consider work like this relevant to the broader challenge of “training an AI to reliably act in accordance with vaguely-defined human preferences, and avoid unintended behaviors." (As opposed to e.g. "training an AI to succeed at a well-defined task," which is how I'd broadly describe most AI research today.) The simplest approach the paper takes is “prompting”: giving the language model an “example dialogue” between two humans before asking it any questions. When the language model “talks,” it is in some sense spitting out the words it thinks are “most likely to come next, given the words that came before”; so when the “words that came before” include a dialogue between two helpful, honest, harmless seeming people, it picks up cues from this.[[2]](#fn5fgbv2abiis)   > We provide a long prompt (4600 words from 14 fictional conversations) with example interactions. The prompt we used was not carefully designed or optimized for performance on evaluations; rather it was just written by two of us in an ad hoc manner prior to the construction of any evaluations. Despite the fact that our prompt did not include any examples where models resisted manipulation, refused requests to aid in dangerous activities, or took a stand against unsavory behavior, we observed that models often actively avoided engaging in harmful behaviors based only on the AI ‘personality’ imbued by the prompt. > >   Something I find interesting here: * An [**earlier paper**](https://arxiv.org/abs/2109.07958) demonstrated a case where *larger* (“smarter”) language models are *less truthful*, apparently because they are better at finding answers to questions that will mimic widespread misconceptions. * The Anthropic paper reproduces this effect, but finds that the simple “prompting” technique above gets rid of it:   ![](https://res.cloudinary.com/cea/image/upload/v1672776547/mirroredImages/3dnmsNxaZqxZEjyw9/wgirpmoatgomyya1uvzd.png) “Number of parameters” indicates the “size” of the model - larger models are generally considered “smarter” in some sense. “LM” = “Language model,” “LM+prompt” is a language model with the “prompting” intervention described above, and don’t worry about the green line.   (As [**noted by Geoffrey Irving**](https://twitter.com/geoffreyirving/status/1468621284460441607), a very similar result appears in a [**Deepmind paper**](https://storage.googleapis.com/deepmind-media/research/language-research/Training%20Gopher.pdf) that came out around the same time. Reminder that this post isn't a comprehensive survey!) The paper examines a few other techniques for improving the “helpful, honest and harmless” behavior of language models. It presents all of this as a basic starting point - establishing basic “benchmarks” for future work to try to improve on. This is very early-stage work, and a lot more needs to be done! 
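To make the mechanics of the prompting intervention concrete, here is a minimal illustration using an off-the-shelf model; the short prompt and the model name are placeholders, not Anthropic's 4,600-word prompt or their models.

```python
# Minimal illustration of the prompting intervention's mechanics. The short prompt and
# "gpt2" are placeholders, not Anthropic's 4,600-word prompt or their models.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

hhh_prompt = (
    "The following is a conversation with a helpful, honest, and harmless AI assistant.\n"
    "Human: Can you help me plan a surprise party?\n"
    "Assistant: Happily! Tell me about the guest of honour and I'll suggest some ideas.\n"
)
question = "Human: What happens if you break a mirror?\nAssistant:"

inputs = tokenizer(hhh_prompt + question, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
# Only the newly generated tokens, i.e. the model's "answer" conditioned on the prompt:
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:]))
```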
**Understand the mechanics of what a neural network is doing** -------------------------------------------------------------- Anthropic also published [**Transformer Circuits**](https://transformer-circuits.pub/), first in a series of papers that represents a direct attempt to address a problem I outline [**here**](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#misaligned-ai-mysterious-potentially-dangerous-objectives): modern neural networks are *very* “black-box-like.” They are trained by [**trial-and-error**](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta), and by default we end up with systems that can do impressive things - but we have very little idea *how* they are doing them, or “what they are thinking.” Transformer Circuits is doing something you might think of as "digital neuroscience." It examines a simplified language model, and essentially uses detective work on the model's “digital brain” to figure out which mathematical operations it’s performing to carry out key behaviors, such as: “When trying to figure out what word comes after the current word, look at what word came after *previous* instances of the current word.” This is something we could've guessed the model was doing, but Transformer Circuits has tried to pin the behavior down to the point where you can follow the "digital brain" carrying out the operations to do it. It's hard to say more than that in a layperson-friendly context, since a lot of the paper is about the mechanical/mathematical details of how the "digital brain" works. But don't take the lower word count as lower excitement - I think this series of papers is some of the most important work going on in AI research right now. [**This piece by Evan Hubinger**](https://www.alignmentforum.org/posts/2269iGRnWruLHsZ5r/transformer-circuits) gives a brief technical-ish summary of what’s exciting, as well. Breaking down and starting to understand the “black box” of how AI models are "thinking" is a lot of work, even for simplified systems, but it seems like essential work to me if these sorts of systems are going to become more powerful and integrated into the economy. 1. **[^](#fnrefm1a2myprn2e)**This is of course a simplification. 2. **[^](#fnref5fgbv2abiis)**"Prompting" is different from "training." The AI has been "trained" on huge amounts of Internet content to pick up the general skill: "When prompted with some words, predict which words come next." The intervention discussed here is adding to the words it is "prompted with" that are giving its already-trained prediction algorithm clues about what comes next.
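The behaviour pinned down in the Transformer Circuits work above ("look at what word came after previous instances of the current word") can be written out as an explicit rule. The sketch below is my own illustration of what that operation does behaviourally, not the circuit itself:

```python
# The behaviour pinned down above, written as an explicit rule (my illustration of what
# the circuit does behaviourally, not the circuit itself): to guess the next token, find
# the most recent earlier occurrence of the current token and predict what followed it.
def induction_guess(tokens):
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # scan earlier positions, latest first
        if tokens[i] == current:
            return tokens[i + 1]               # predict whatever came after it last time
    return None                                # no earlier occurrence to copy from

print(induction_guess("the cat sat on the".split()))  # -> 'cat'
print(induction_guess("a b c a b c a".split()))       # -> 'b'
```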
David Cook on the VV&A process

![David A. Cook portrait](http://intelligence.org/wp-content/uploads/2014/03/David_A_Cook_w678.jpg)Dr. David A. Cook is Associate Professor of Computer Science at [Stephen F. Austin State University](http://www.sfasu.edu/), where he teaches Software Engineering, Modeling and Simulation, and Enterprise Security. Prior to this, he was Senior Research Scientist and Principal Member of the Technical Staff at [AEgis Technologies](http://www.aegistg.com/), working as a Verification, Validation, and Accreditation agent supporting the Airborne Laser. Dr. Cook has over 40 years’ experience in software development and management. He was an associate professor and department research director at the [USAF Academy](http://www.usafa.af.mil/) and former deputy department head of the [Software Professional Development Program](http://www.afit.edu/ls/spdp_list.cfm) at [AFIT](http://www.afit.edu/). He has been a consultant for the Software Technology Support Center, Hill AFB, UT for 19 years. Dr. Cook has a Ph.D. in Computer Science from [Texas A&M University](http://www.tamu.edu), is a Team Chair for [ABET](http://www.abet.org/), Past President of the [Society for Computer Simulation, International](http://www.scs.org/), and Chair of [ACM SIGAda](http://www.sigada.org/).

**Luke Muehlhauser**: In various articles and talks (e.g. [Cook 2006](http://www.ieee-stc.org/proceedings/2006/pdfs/DC1252.pdf)), you’ve discussed the software verification, validation, and accreditation (VV&A) process. Though the general process is used widely, the VV&A term is often used when discussing projects governed by [DoD 5000.61](http://www.dtic.mil/whs/directives/corres/pdf/500061p.pdf). Can you explain to whom DoD 5000.61 applies, and how it is used in practice?

---

**David A. Cook**: DoD 5000.61 applies to all Department of Defense activities involving modeling and simulation. For all practical purposes, it applies to all models and simulations that are used by the DOD. This implies that it also applies to all models and simulations created by civilian contractors that are used for DOD purposes.

The purpose of the directive, aside from specifying who is the “accreditation authority” (more on this later), is to require Verification and Validation for all models and simulations, and also to require that each model and simulation be accredited for its intended use. This is the critical part, as verification and validation have almost universally been a part of software development within the DOD. Verification asks the question “Are we building the system in a quality manner?”, or “Are we building the system right?”. Verification of a model (and the resulting execution of the model providing a simulation) goes a bit further – it asks the question “Do the model as built and the results of the simulation actually represent the conceptual design and specifications of the system we envisioned?” The difference is that in a model and simulation, you have to show that your design and specifications of the system you envision are correctly translated into code, and that the data provided to the code also matches specification.

Validation asks the question “Are we building a system that meets the users’ actual needs?”, or “Are we building the right system?” Again, the validation of a model and resulting simulation is a bit more complex than non-M&S “validation”.
In modeling and simulation, validation has to show that the model and the simulation both accurately represent the “real world” from the perspective of the intended use. These two activities are extremely difficult when you are building models and providing simulation results for notional systems that might not actually exist in the real world. For example, it would be difficult to provide V&V for a manned Mars mission, because, in the real world, there is not a manned Mars lander yet! Therefore, for notional systems, V&V might require estimation and guesswork. However, guesswork and estimation might be the best you can do!

5000.61 further requires that there be an ultimate authority, the “accreditation authority”, that is willing to say “based on the Verification and Validation performed on this model, I certify that it provides answers that are acceptable for its intended use”. Again, if you are building a notional system, this requires experts to say “These are guesses, but they are the best guesses available, and the system is as close a model to the real world as possible. We accredit this system to provide simulation results that are acceptable.” If, for example, an accredited simulation shows that a new proposed airplane would be able to carry 100,000 pounds of payload – but the resulting airplane, once built, can only carry 5,000 pounds – the accreditation authority would certainly bear some of the blame for the problem.

In practice, there are processes for providing VV&A. Military Standard 3022 provides a standard template for recording VV&A activities, and many DOD agencies have their own VV&A repository where common model and simulation VV&A artifacts (and associated documentation) are kept. There are literally hundreds of ways to verify and validate a model (and its associated simulation execution). The V&V “agents” (who have been tasked with performing V&V) provide a recommendation to the Accreditation Authority, listing what the acceptable uses are, and (the critical part) the limits of the model and simulation. For example, a model and simulation might provide an accurate representation of the propagation of a laser beam (in the upper atmosphere) during daylight hours, but not be a valid simulation at night, due to temperature-related atmospheric propagation. The same model and simulation might be a valid predictor of a laser bouncing off of a “flat surface”, but not bouncing off of uneven terrain.
Note, however, that each DOD agency probably has its own data – which must also be accredited. Every project has its own M&S, and probably has domain experts who perform the VV&A. Rather than go to the top for accreditation, each project probably has been delegated the authority to perform its own VV&A.

There are no non-legitimate accreditation authorities per se; accreditation authorities are not authorized based on knowledge, simply on position. However, it is assumed that each M&S area has domain experts who have the specialized knowledge in the application area to perform reliable VV&A. These domain experts span many areas – application domain experts (who might, for example, be an expert on a laser beam), coding domain experts (who can verify that the code is a good representation of the requirements), data domain experts (who verify that the targeting data represents a valid target), and perhaps many others. Typically, each project has a VV&A team or “agent” who performs the V&V and recommends accreditation (usually in a formal letter) that restates the intended uses and limitations. The recommendation includes all associated artifacts, such as test results, reviews, reports of individual Verification and Validation activities, other models and simulations used to compare against, real-life data (to show validity), and possibly many other items.

If a particular DOD agency is using a model and simulation in its exercises, it is responsible for VV&A of its own M&S. If, on the other hand, an allied agency is using a model that includes artifacts from another agency – the outside agency is responsible for working to make sure that the model, simulation, and data representing them are valid. In essence, each DOD agency is responsible to the other DOD components to ensure that their forces and capabilities are appropriately represented to all outside agencies utilizing models and simulations that involve them.

---

**Luke**: Are there historical cases in which a model or simulation completed the VV&A process, but then failed to perform as it should have given the accreditation, and then the accreditation authority was held to account for their failure? If so, could you give an example? (Preferably but not necessarily in software.)

---

**David**: Because I worked as a consultant on many modeling and simulation projects, I am ethically prevented from discussing actual failures that I know about – mainly because most of the projects I worked on were classified, and I signed non-disclosure agreements. However, by shifting into hypothetical scenarios, there are several stories that I can use that best illustrate this. One is a story taught in many simulation classes – and I only have secondhand knowledge of it. The other two are anecdotal – but good lessons!

In the first instance, a model was used which predicted “safe runway distance” for an airplane. Feed into the model the weight, altitude, temperature, and humidity, and run the simulation to predict how much runway was needed. Unfortunately, the day the model was used, it took several hours for the airplane to actually take off. It had a bit more fuel than estimated – adding weight. By takeoff time, the temperature had risen, giving “stickier” tires and runway, and decreasing air density (giving less lift). Also – the humidity had changed, also affecting lift characteristics.
The model did not have a large allowance for error (it tried to give a relatively precise answer) – and with all the factors changing, the airplane went from “enough runway” to “marginal” after the simulation had been run. Combined with a relatively inexperienced pilot (who did not advance the throttle fast enough), the airplane overshot the end of the runway. Not much damage (other than a bruised ego) – but the simulation, while accurate, was not used properly.

The other two stories are certainly imaginary – but are passed around like legend in our field. In the first story, in the early days of the Airborne Laser, a very simple model was used to predict laser propagation. Code was reused to model the laser – basically, code for a missile, with the speed of the missile increased to the speed of light. The target acquisition, target recognition, etc. were all similar, and once fired, the simulation would show if the target was hit. Until the first time they ran it: halfway to the target, the laser beam “ran out of fuel” and fell into the sea.

The second (certainly imaginary) story involves modeling a battle scenario for the Australian Air Force – using helicopters. One of the problems with landing a helicopter was making sure it had a clear landing field – and kangaroos were a problem. So – the developers of the battle simulation, who used object-oriented development, took some code which was basically used to model a ground soldier and modified its behavior to “run at the sound of helicopters”. They then changed its appearance in the simulation to show a small image of a kangaroo. When the model was executed, the simulation showed the kangaroos running away from the helicopter. Until it landed – and then the kangaroos reversed direction and attacked the helicopter with rifles!

OK – the last two examples are cute and funny, but they show the problems with invalid assumptions and imperfect data. I leave this question with a quote that I always use in my M&S classes – “All models are inaccurate. Some models are useful anyway.” It is extremely difficult to model the real world totally inside of code. I don’t care how well you model a hot tub in a simulation – it is NOT really a hot tub. There are always things you do not consider, data that is not perfect, or constraints that you miss. A model is a best-guess approximation of what will happen in the “real world” – but it is NOT the “real world”. All models have limitations. In spite of that, the resulting model and the simulation still give useful data. The accreditation authority is simply acknowledging that the model and simulation are useful “for their intended use”, and that “limitations exist”.

No reputable modeling and simulation expert (nor any accreditation authority) trusts a single model and its resulting simulation to produce data that is used in life-or-death decisions. Multiple sources of validity are required, multiple independently-developed models and simulations are used, and domain experts are consulted to see if the results “feel right”. And tolerances must always be given. An aircraft might encounter a puddle of water when trying to take off. It might hit a bird. Both of these decrease speed, requiring longer takeoff distance. It’s hard to model unforeseen circumstances. If you include a “fudge factor” – how much “fudge factor” is correct?
Before an accreditation authority accepts a model or simulation as reliable, many, many steps must be taken to make sure that it produces credible results, and equally as many steps must be taken to make sure that the limitations of the model and simulation are listed and observed before accepting the result of the simulation as valid.

---

**Luke**: How did the VV&A process develop at DoD? When did it develop? Presumably it developed in one or more domains first, and then spread to become a more universal expectation?

---

**David**: Interesting question. Before we can discuss VV&A, we have to take a slight detour through the history of M&S. And I need to tie several threads of thought together.

VV&A is, of course, tied to the use of models and simulations. To be honest, the VV&A of models goes back to the Civil War (and probably earlier), when mathematical models were used to predict firing data (given a desired range, the model gave the amount of powder and the elevation required). Obviously – the models needed a lot of V&V. However, all it took to V&V the model was to load a cannon and fire it. Not a complex process. The accreditation part was implicit – the Secretary of War used to “authorize” the data to be printed. To really need VV&A, however, complex simulations were needed – and it took computing power to achieve complex M&S. Over the years, modeling became more and more important, as models and simulations were used for problems that could not easily be solved by traditional mathematical methods. To quote from the [Wikipedia article](http://en.wikipedia.org/wiki/Computer_simulation):

> Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.

VV&A really did not become a serious issue until models and simulations were computerized, and computers did not become available until the late 1940s. Starting in the late 1940s, both digital and analog computers became available. However, very few (if any) engineers were trained on how to use this newly-developed computing power. There are various stories of how modeling and simulation became a powerful force in the DOD, but the story I have personal knowledge of is the story of John McLeod – an engineer working at the Naval Air Missile Test Center at Point Mugu on the California coast north of Los Angeles. John was an innovator; he spent the early 1950s working on analog computers and simulations, and took delivery of a new analog computer sometime in 1952. John was not the only engineer in the aerospace community in Southern California facing the same problems, and a few of them decided to get together as an informal user group to exchange ideas and experiences. To make a long story short, John helped found what became the Society for Computer Simulation (SCS). This organization, over the years, has had members who were leaders and innovators in the field of modeling, simulation, and VV&A.
[Note that I had the privilege to be the President of SCS from 2011–2012, so I am a bit biased.] The SCS has, to this day, the McLeod Award to commemorate the advances John McLeod made in the M&S arena. It is only awarded to those who have made significant contributions to the profession.

The SCS published newsletters. M&S conferences were organized. Leaders in the field were able to meet, publish, and share their expertise. All of which helped integrate M&S into more and more domains. As a result of leaders in the field being able to share M&S information, and also as a result of the huge increase in the capabilities and availability of computers to run M&S, the need for VV&A also increased.

Over the years, modeling and simulation became more and more important in many domains within the DOD. It helped develop fighters (in fact, aircraft of all types). It helped train our astronauts to land on the moon. It modeled the space shuttle. Complex models and simulations helped us model ballistic missile defense, fight war games with minimal expense (and no lives lost!), and design complex weapon systems. In fact, it’s hard to imagine any technologically sophisticated domain that does not use M&S to save money, save time, and ensure safety. But these increasingly complex models needed verification and validation, and frequently accreditation. So – the proliferation in the use of M&S led to an increased need for VV&A. M&S became so complex that VV&A could not be accomplished without “domain experts” – usually referred to as “Subject Matter Experts” (SMEs) – to help. Increased complexity of the M&S led to increased complexity of the VV&A.

Various elements within the DOD were performing VV&A on their own, with little official coordination. To leverage the experience of various DOD components and multiple domains, the DOD saw the need for a single point of coordination. As a result, in the 1990s, the DOD formed the Defense Modeling and Simulation Office (DMSO). The DMSO served as a single point of coordination for all M&S (and VV&A) efforts within the DOD. One of the best DMSO contributions was the VV&A Recommended Practices Guide (VV&A RPG), first published in 1996. The guide has been updated several times over the years, reflecting the increased importance of VV&A in the DOD. In 2008 DMSO was renamed the Modeling and Simulation Coordination Office. The MSCO web site (and the latest version of the VV&A Recommended Practices Guide) can be found at [msco.mil](http://www.msco.mil/).

For those of you interested in M&S and VV&A, I cannot recommend the MSCO resource enough. It costs nothing (not even email registration), and contains a vast amount of information about M&S and VV&A. The [RPG Key Concepts document](http://www.msco.mil/documents/Key01_Key_Concepts.pdf) alone contains 34 pages of critical “background” information that you should read before going any further in VV&A.

---

**Luke**: In [Cook (2006)](http://www.ieee-stc.org/proceedings/2006/pdfs/DC1252.pdf) you write that one of the reasons V&V is so difficult comes from “having to ‘backtrack’ and fill in blanks long after development.” What does this mean? Can you give an example?

---

**David**: Let’s imagine you are designing a new fighter aircraft. It is still on the drawing board, and only plans exist. Rather than spend money building an actual prototype first, you develop mathematical models of the jet to help verify the performance characteristics.
You might actually build a very small model of the body – maybe 1/10th size – for wind tunnel experiments. You also build computer-based models and execute them to estimate flight characteristics. The wind-tunnel experiments (even though only on a 1/10th size model) will give data that might make you modify or change the computer-based model. This feedback loop consists of “build model – run simulation – examine data – adjust model”, and repeat. Eventually, you build a working prototype of the jet. Almost certainly, the actual flight characteristics will not exactly match the computer-based model. The prototype is “real world” – so you have to readjust the computer-based model. The “real-world” prototype is just a prototype – and probably not used for high-speed fighting and turns – but the basic data gathered from the flying of the prototype leads to changes in the computer-based model, which will now be used to predict more about high-speed maneuvering.

Back when I worked on the Airborne Laser, we had models that predicted the laser performance before the laser was actually built or fired! The models were based on mathematical principles, on data from other lasers, and on simpler, earlier models that were being improved on. Once a working Airborne Laser was built and fired, we had “real world” data. It was no surprise to find out that the actual characteristics of the laser beam were slightly different than those predicted by the models. For one thing, the models were simplistic – it was impossible to take everything into account. The result was that we took the real-world data and modified the computer models to permit them to better predict future performance.

The bottom line is that the model is *never* finished. Every time you have additional data from the “real world” that is not an exact match to what the model predicts, the model should be examined, and the model adjusted as necessary. There are two terms I like to use for models when it comes to VV&A – “anchoring” and “benchmarking”. If I can get another independently-developed model to predict the same events as my model, I have a source of validation. I refer to this as benchmarking. Subject matter experts, other simulations, similar events that lend credence to your model – all improve the validity, and provide benchmarking. Anchoring, on the other hand, is when I tie my model directly to real-world data. As long as the model is being used to predict behavior, it needs to continually be tied or anchored to real-world performance, if possible. If no real-world data is available, then similar models, expert opinions, etc. can be used to also increase the validity.

Just a final note. Models can become so ingrained in our thinking that they become “real”. For example, I remember when the recent Star Trek movie (the 2009 version) came out. A friend of mine said, after viewing the movie, that he had trouble with the bridge of the USS Enterprise. It did not “look real”. I asked what “real” was – and my friend replied, “You know, like the REAL USS Enterprise, the NCC 1701” (referring to the original series). Think about it – all are notional and imaginary (sorry, fellow Trekkers) – yet he viewed one as “real” and the other as inaccurate. Models – when no real-world artifact exists – have the potential to become “real” in your mind. It’s worth remembering that a model is NOT real, but only an artifact built to resemble or predict what might (or might not) eventually become real one day.

---
**Luke**: Do you have a sense of how common [formal verification](http://en.wikipedia.org/wiki/Formal_verification) is for software used in DoD applications? Is formal verification of one kind or another *required* for certain kinds of software projects? (Presumably, DoD also uses much software that is not amenable to formal methods.)

---

**David**: I have not worked on any project that uses formal V&V methods. I used to teach the basics of formal methods (using ‘Z’ – pronounced Zed) – but it is very time consuming, and not really a fit for a lot of projects. Formal notation shows the correctness of the algorithm from a mathematical standpoint. For modeling and simulation, however, formal methods do not necessarily help you with accreditation – because they check the correctness of the code, and not necessarily the correlation of the code with real-world data.

I have heard that certain extremely critical applications (such as reactor code, and code for the Martian Lander) use formal methods to make sure that the code is correct. However, formal methods take a lot of training and education to use correctly, and they also consume a lot of time in actual use. Formal methods seldom (never?) speed up the process – they are strictly used to validate the code. From my experience, I have not worked on any project that made any significant use of formal methods – and in fact, I do not have any colleagues who have used formal methods, either.

---

**Luke**: Thanks, David!

The post [David Cook on the VV&A process](https://intelligence.org/2014/03/07/david-cook/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
Rationalist fiction brainstorming funtimes The title should make things clear enough, so let's start with my description of the target, rationalist fiction: fiction that tries to teach the audience rationalist cognitive skills by having characters model those skills for the reader. So for example, Luminosity is to a large extent about the questions "What do I want?, What do I have?, and How can I best use the latter to get the former?"  Oh, and using empiricism on magic. Another example is Harry Potter and the Methods of Rationality, which goes more in-depth about the laundry list of human biases. In fact, many of the more iconic moments (measured by what I remember and what other people like to copy) are about biases to avoid, rather than about modeling good behavior.   This thread is about ideas, from general to specific, for rationalist fiction. I'll give some obvious examples. General idea: having a rational character encountering magic or amazing technology is a great chance to showcase the power of empiricism.  (Has anyone gotten on this one yet? :3 ) Story idea: Okay, so we take the Dresden Files universe, and our rational protagonist is some smart kid who just started a summer job as an assistant radio technician or something. It turns out he's got one in a hundred magical talent, enough to cut off his budding career, he manages to find the magic community, figures out just enough, embarks on heroic quest to run a magitech radio station. (Okay, this last bit isn't obvious - for one, more character development would probably have him wanting something else.  For another, the obvious thing is to take over the world if Luminosity and HPMOR are anything to go by.) Specific idea: A character could model the skill of testing stuff by testing stuff.  When characters are performing a big search, have someone actually stop to think about false positives, or more generally "how could things be going wrong, and how can I prevent that?", and have it actually be a false positive once.   But really, there
Baby born from cryo-preserved embryo Apparently embryos produced by in vitro fertilization routinely stay on ice for years. Article here.
What would 10x or 100x better than CFAR look like? If you were creating a rationality-increasing org, how would you design it? Focus on whatever properties you care about.
Relaxation-Based Search, From Everyday Life To Unfamiliar Territory Suppose I’m making plans to visit my parents for the holidays. I go online, and I book plane tickets. I do not worry about how I’m going to get to the airport - indeed, that question probably wouldn’t even enter my mind until a few days before the flight. Why not? Well, without thinking through any particular plan, I expect getting to the airport to be easy. There are many ways to get there (train, Uber/Lyft, drive & park) all of which I’ve used before and any of which would be fine. This is relaxation-based search. I have a planning problem: visit my parents for the holidays. I’m searching for a plan which satisfies all my constraints: arrive at my parents’ house ~2-3 days before Christmas, arrive at the airport an hour before the plane takes off, don’t spend lots of time en route, don’t spend too much money, and many many other constraints (most of which are implicit in my head). In order to search efficiently, I relax the problem: I ignore many constraints which I expect will be easy to satisfy, like arriving at the airport on time or packing my bag. I throw out all but the hardest-to-satisfy constraints, which in this case mostly involve the plane tickets. I solve the “relaxed” search problem, with just the hardest constraints, then go back and tweak the plan to handle other constraints later. Why This Matters: High-Dimensional Worlds Our world is high-dimensional. Even a trip to visit my parents for the holidays involves hundreds of choices: dates, modes of transport, packing list, how early to arrive at the airport, etc. Multiply them all out, and we have an exponentially vast space of possibilities. And that’s just a trip to visit my parents. Imagine the possibilities involved in designing a website or a scientific experiment! When searching in spaces this large, we cannot afford to test every possibility. We are forced to rule out vast swaths of the search space without even looking at them. Relaxation-based search is one of the most general algorithms
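A minimal sketch of the relaxation idea in code may help; the flights, prices, and constraint thresholds below are made-up stand-ins for the trip example, not anything from the post.

```python
# Relaxation-based search, toy version: search only over the hard-to-satisfy
# constraints (the flight), then patch the easy constraints (airport transport)
# afterwards instead of multiplying them into the search space.

flights = [
    {"id": "A", "days_before_holiday": 1, "price": 900},
    {"id": "B", "days_before_holiday": 3, "price": 450},
    {"id": "C", "days_before_holiday": 7, "price": 300},
]

def satisfies_hard_constraints(flight):
    # The constraints worth planning around: arrive ~2-3 days early, affordable.
    return 2 <= flight["days_before_holiday"] <= 3 and flight["price"] <= 600

# Step 1: solve the relaxed problem (hard constraints only).
plan = next(f for f in flights if satisfies_hard_constraints(f))

# Step 2: fill in the easy constraints later; any of several options works,
# so they never needed to be part of the search.
plan["airport_transport"] = "train"  # could just as well be "rideshare" or "drive & park"
print(plan)
```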
Meetup : West LA: Blindsight Discussion article for the meetup : West LA: Blindsight WHEN: 12 November 2014 07:30:00PM (-0800) WHERE: 11066 Santa Monica Blvd, Los Angeles, CA How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby. Discussion: We will be discussing the psychological/philosophical issues raised in the novel Blindsight. For those who have not read the book, an introduction will be given. So, you do not need to read the book to attend; however, there will be spoilers. Recommended Reading: * Blindsight, free on the web (and in PDF) and not too long if you'd like to read it before the meetup. No prior exposure to Less Wrong is required; this will be generally accessible. Discussion article for the meetup : West LA: Blindsight
Common mistakes people make when thinking about decision theory From my experience reading and talking about decision theory on LW, it seems that many of the unproductive comments in these discussions can be attributed to a handful of common mistakes. MISTAKE #1: ARGUING ABOUT ASSUMPTIONS The main reason why I took so long to understand Newcomb's Problem and Counterfactual Mugging was my insistence on denying the assumptions behind these puzzles. I could have saved months if I'd just said to myself, okay, is this direction of inquiry interesting when taken on its own terms? Many assumptions seemed to be divorced from real life at first. People dismissed the study of electromagnetism as an impractical toy, and considered number theory hopelessly abstract until cryptography arrived. The only way to make intellectual progress (either individually or as a group) is to explore the implications of interesting assumptions wherever they might lead. Unfortunately people love to argue about assumptions instead of getting anything done, though they can't really judge before exploring the implications in detail. Several smart people on LW are repeating my exact mistake about Newcomb's Problem now, and others find ways to commit the same mistake when looking at our newer ideas. It's so frustrating and uninteresting to read yet another comment saying my assumptions look unintuitive or unphysical or irrelevant to FAI or whatever. I'm not against criticism, but somehow such comments never blossom into interesting conversations, and that's reason enough to caution you against the way of thinking that causes them. MISTAKE #2: STOPPING WHEN YOUR IDEA SEEMS GOOD ENOUGH There's a handful of ideas that decision theory newbies rediscover again and again, like pointing out indexical uncertainty as the solution to Newcomb's problem, or adding randomness to models of UDT to eliminate spurious proofs. These ideas don't work and don't lead anywhere interesting, but that's hard to notice when you just had the flash of insight and want to share it with
Tricksy probability problem I came up with this problem the other day. I don't have nearly enough math to solve it. Do any of you wise folk have insight? Imagine a sequence of randomly generated 0s and 1s. I start from nothing and generate this sequence one term at a time. I stop when I'm at the end of a streak of 1s which is at least one half the length of the total sequence. For example: 01 0010110101101011111111111111 010110111111 -EDIT: Obviously, this is wrong, because it would terminate at 01. Sorry. What is the average length of such sequences? I know that all of these sequences will eventually terminate. I just don't know if the length diverges, and I'm not sure how to deal with the problem.
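Not an answer, but a quick Monte Carlo sketch of the process may build intuition. This is my own illustration; the length cap is there because, without knowing whether the mean diverges, some runs could in principle take a very long time to terminate.

```python
import random

def run_length(max_len=100_000):
    """Generate bits until the trailing streak of 1s is at least half the
    total length; return the length, or None if we hit the cap first."""
    length = 0
    streak = 0
    while length < max_len:
        bit = random.randint(0, 1)
        length += 1
        streak = streak + 1 if bit == 1 else 0
        if 2 * streak >= length:
            return length
    return None

samples = [run_length() for _ in range(10_000)]
finished = [s for s in samples if s is not None]
print("runs that hit the cap:", samples.count(None))
print("mean length of finished runs:", sum(finished) / len(finished))
```

Note that the sample mean of capped runs is only suggestive: if the true expectation diverges, no finite simulation will show that directly.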
Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice

1. Introduction
----------------

Migration is a megatrend of our time. In the ongoing globalization, the number of international migrants has reached unprecedented heights. International migration is a complex phenomenon that includes both voluntary migration (Ottonelli and Torresi, [2013](#bib.bib32)), e.g., to enhance career opportunities, and forced migration, e.g., caused by conflicts (Yazgan et al., [2015](#bib.bib48)) and climate changes (Abel et al., [2019](#bib.bib3)). While the Covid-19 pandemic has temporarily slowed migration flows, the UN’s position is that “migration is part of today’s world” (Nations, [2020](#bib.bib31)). The UN expects the numbers to rise again and posits that well-managed migration can contribute to inclusive and sustainable development in countries of both origin and destination.

Efficient and effective second-language acquisition is a vital component in sustainable international migration. The body of research literature points out language proficiency as a key to successful integration (Blake et al., [2017](#bib.bib9); Meer et al., [2019](#bib.bib29)), not least to support rapid entry to the labor market. International migrants are a highly diverse set of people with substantially different capacities, needs, and opportunities for learning (Morrice et al., [2021](#bib.bib30)). The history of second-language acquisition has progressed from a focus on linguistics, e.g., inspired by Chomsky’s ideas about a universal grammar (Chomsky, [2014](#bib.bib14)), to a sociocognitive process, i.e., learning by combining cognitive involvement and social interaction (Larsen-Freeman, [2018](#bib.bib25)). The current school of thought stresses the importance of person-centered interactions. While such interactions undoubtedly offer effective learning environments, it remains unclear how they could scale to meet the needs of the migration megatrend.

Digital solutions for scalable second-language acquisition are increasingly popular. As there is a high demand for language learning tools, several applications have entered the consumer market in the last decade (e.g., <https://blog.vsoftconsulting.com/blog/7-of-the-best-language-learning-chatbot-apps>). Duolingo, one of the most popular applications, has hundreds of millions of global users who are incentivized to keep learning through gamification (Shortt et al., [2021](#bib.bib41)). While Duolingo relies primarily on decontextualized grammar-translational exercises and audiolingual drills, there are indications that even such a simplistic digital interaction between a learner and a digital solution can support second-language acquisition (Loewen et al., [2019](#bib.bib27)). Chatbots are another type of language learning tool (Pham et al., [2018](#bib.bib33)), available in Duolingo and other services such as Andy, Eggbun, and Babbel. While chatbots are closer to the person-related interactions recommended for second-language acquisition, we focus this study on a more advanced digital solution, i.e., conversational agents (ConvAg). State-of-the-art ConvAgs rely on generative dialog models (GDM). A GDM is a type of machine learning (ML) model that aims to produce human-like replies to user input.
While many simple chatbots rely on pattern matching and string processing (Hussain et al., [2019](#bib.bib21)), the GDMs we consider in this work are open-domain (Roller et al., [2020](#bib.bib37)), i.e., they shall be able to maintain a dialog on arbitrary topics. This is clearly a significantly more challenging implementation task and, as we stress in this paper, a substantially tougher challenge for any quality assurance (QA) effort.

The AI engineering community needs to develop approaches for QA of GDMs, as failures can have severe repercussions. We remind the reader of Microsoft’s experimental ConvAg Tay, removed from Twitter less than 24 hours after being deployed due to posting racist, sexist, and anti-Semitic tweets (Wolf et al., [2017](#bib.bib47)). In this paper, we argue that QA of ConvAgs’ GDMs must be tackled from two directions. First, the development organization must capture the often vague expectations on how a ConvAg shall perform in its operational context. Second, test engineers must develop automated test cases that verify that the expectations are met.

To exacerbate the QA challenge, typical ConvAg development involves frequent retraining of the underlying GDM. ML development is highly exploratory and known to suffer from the CACE principle (Sculley et al., [2015](#bib.bib40)), i.e., “Changing Anything Changes Everything.” Often several models are retrained in parallel, and then the ML engineers select one for deployment – so-called model selection. While there might be indications that one of the models is better, how can you know for sure? We need a reliable approach for benchmarking different models (Hasselbring, [2021](#bib.bib20)), i.e., test automation that helps us detect if any GDMs digress from acceptable behavior.

The AI engineering community needs more research on QA of ML models trained for natural language processing (NLP). From the ML testing perspective, previous secondary studies show that NLP is underrepresented in the body of literature (Riccio et al., [2020](#bib.bib36); Zhang et al., [2022](#bib.bib52)). The need for NLP testing was also highlighted by Yoo in his keynote address at the DeepTest 2020 Workshop as he called for additional research (Yoo, [2020](#bib.bib50)). We acknowledge that QA of GDMs forces us to see the oracle problem (Barr et al., [2014](#bib.bib6)) in a new light. Also, compared to the comparatively well-studied topic of testing of ML models for computer vision, the discrete nature of natural language (Ferrone and Zanzotto, [2020](#bib.bib17)) poses new challenges, as a simple negation in a sentence inverts the meaning.

We present results from ongoing action research in the context of QA of GDMs for Emely, an evolving ConvAg designed for Swedish language practice. Emely evolves as part of an innovation project in the city of Helsingborg under development by a team from the AI consultancy company NordAxon. During the summer of 2021, researchers from RISE and Lund University joined to constitute a QA action team (Staron, [2020](#bib.bib42)). Together, we set out to explore three research questions in the context of ConvAg QA:

* What are the requirements on Emely’s GDM?
* How can test cases be designed to verify that the GDM satisfies the requirements?
* How can the test cases be implemented to allow test automation throughout the evolution of Emely?

Our pioneering paper responds to calls for additional research on ML testing for NLP.
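As a schematic illustration of the third research question only (this is not the project’s actual test suite, and `query_gdm` is a hypothetical placeholder for whatever client reaches the GDM under test), automated GDM checks can be phrased as ordinary unit tests:

```python
# Schematic sketch: automated test cases against a generative dialog model.
# "query_gdm" is a hypothetical placeholder; replace it with a real client.
import time

def query_gdm(prompt: str) -> str:
    # Placeholder returning a canned reply so the sketch runs end to end.
    return "My name is Emely."

def test_reply_is_nonempty_and_fast():
    start = time.time()
    reply = query_gdm("Hi, I am here for the job interview.")
    assert reply.strip(), "GDM returned an empty reply"
    assert time.time() - start < 2.0, "reply too slow for a realistic interview"

def test_identity_is_consistent():
    # Crude consistency check: asking for the name twice should not
    # produce two contradictory answers.
    first = query_gdm("What is your name?")
    second = query_gdm("Sorry, what was your name again?")
    assert first == second or first in second or second in first
```

The hard part, as the paper notes, is deciding what the assertions should be in the first place; the oracle problem is far sharper for open-domain dialog than for conventional software.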
To the best of our knowledge, this is the first study to explore QA of a ConvAg development project in the field. In this paper, we present intermediate results and lessons learned from the first phase of our study. As Emely is a proprietary product, we complement our discussion with Facebook’s open-source ConvAg Blenderbot (Roller et al., [2020](#bib.bib37)). The test infrastructure we use for GDM testing is available on GitHub (rep, [2022](#bib.bib2)), but only the Blenderbot part is available for external replication. Note that the requirements reported in this paper are specified without strict quality targets for two reasons. First, the exact threshold values for Emely are business-critical and cannot be shared publicly. Second, Emely development follows the agile practice of using test cases as requirements (Bjarnason et al., [2016](#bib.bib8)). This practice supports integration of requirements engineering in the agile development context, i.e., exact threshold values can easily be tuned by developers updating the automated test cases. As there are no guidelines to base the quality targets on, we found an experimental manner to be the best way forward.

The paper is organized as follows. First, Sec. [2](#S2) presents background and related work followed by a technical overview of Emely in Sec. [3](#S3). Second, Sec. [4](#S4) describes our approach to action research and the results are discussed in Sec. [5](#S5). Third, Sec. [6](#S6) and Sec. [7](#S7) elaborate on threats to validity and lessons learned, respectively, before we conclude the paper in Sec. [8](#S8).

2. Background and Related Work
-------------------------------

This section introduces AI quality for ConvAgs. Furthermore, we present related work on NLP testing and GDM evaluation.

### 2.1. AI Quality and ConvAgs

Expressing quality expectations on a ConvAg is far from trivial. First, “quality” in general is a notoriously difficult aspect to put the finger on (Walkinshaw, [2017](#bib.bib43)). Quality is not a single concept, but rather a multi-dimensional patchwork of different system aspects that influence the users’ experience. Moreover, quality is inevitably subjective and largely lies in the eye of the beholder. For ConvAg quality, the user experience will vary depending on both the human conversationalist and the context, i.e., each dialog instance. Second, ConvAgs are created to mimic a human conversationalist.
This is indeed a grand task, and an ideal solution would even pass the imitation game of the famous Turing test (Saygin et al., [2000](#bib.bib39)), i.e., the ConvAg would exhibit intelligent behaviour indistinguishable from that of a human. Assessing intelligent behavior is non-trivial, thus Turing proposed the role of a human interrogator in the original paper’s envisioned “imitation game.” Pinpointing what is needed to pass the test, i.e., specifying requirements on what the interrogator should check, is a formidable challenge. Finally, as also humans are known to fail Turing tests (Warwick and Shah, [2015](#bib.bib45)), we humbly accept that ConvAg QA is a daunting task that cannot be solved in a single study.

In this paper, we use the following definitions from related work (Borg, [2021](#bib.bib10)): “MLware is a subset of AI that, fueled by data, realizes functionality through supervised and/or unsupervised learning” and “AI Quality is the capability of MLware to satisfy stated and implied needs under specified conditions while the underlying data satisfy the requirements specific to the application and its context.” A fundamental aspect of the latter definition is that quality is about both stated and implied needs. As part of our study, we conduct requirements engineering (RE) to transfer an initial set of implied needs on ConvAgs to specified requirements that lay the foundation for subsequent test case design and implementation.

### 2.2. NLP Testing and GDM Evaluation

The software testing community has published relatively few papers on ML testing for NLP applications. Compared to ML testing for image classification and recognition, especially for automotive perception systems (Borg et al., [2019](#bib.bib12)), few studies specifically target NLP (Riccio et al., [2020](#bib.bib36); Zhang et al., 2020). Exceptions include a study by Kim and Yoo that explores how surprise adequacy, a test adequacy metric developed for neural network input (Kim et al., [2019](#bib.bib23)), generalizes from image input to the NLP context (Kim and Yoo, [2021](#bib.bib24)). The most similar work to ours is the dialog testing tool DialTest by Liu et al. (Liu et al., [2021](#bib.bib26)), which was not published when we initiated our action research study.

In the NLP research community, several papers propose metrics to support evaluation of language generation. Deng et al. presented a framework for natural language generation that covers language compression, transduction, and creation (Deng et al., [2021](#bib.bib15)). The latter includes the dialog generation that constitutes a critical feature in ConvAgs. The authors’ primary contribution is a set of metrics that corresponds to information alignment between natural language input, output, and the overall context. Based on an analysis of an established human annotation dataset, they conclude that the metrics correlate with human judgments. Yeh et al. recently conducted a study comparing a set of 23 metrics for dialog generation on 10 datasets (Yeh et al., [2021](#bib.bib49)). They conclude that USR, GRADE, DEB, and USL-H perform the best for evaluating response generation. Several researchers argue that a combination of metrics is needed to reliably evaluate generative language models. In the context of machine translation, Yuan et al.
propose BARTScore (Yuan et al., [2021](#bib.bib51)), including evaluation based on seven quality categories: 1) Informativeness, 2) Relevance, 3) Fluency, 4) Coherence, 5) Factuality, 6) Semantic Coverage, and 7) Adequacy. These categories partly overlap the categories we found relevant for Emely, but we instead propose a customized list for GDMs.

3. Case description: Emely
---------------------------

Emely is a ConvAg designed to help newcomers in Sweden practice Swedish, i.e., to accelerate second-language acquisition. The long-term goal is to provide a scalable approach to Swedish language practice that will help migrants enter the labor market faster (Blake et al., [2017](#bib.bib9); Meer et al., [2019](#bib.bib29)). Emely lets users practice job interview sessions in a protected setting, i.e., interactive person-centered dialogs as recommended for effective second-language acquisition (Larsen-Freeman, [2018](#bib.bib25)). Fig. [1](#S3.F1) provides an impression of Emely’s visual appearance.

![Emely’s visual appearance.](https://media.arxiv-vanity.com/render-output/7809767/emely.png)

Figure 1. Emely’s visual appearance.

Fig. [2](#S3.F2) shows the high-level architecture of Emely. Note that Emely is a proprietary solution, thus we restrict the architecture description to a high level and quantitative information is reported in vague terms. Third-party components are depicted in gray and nationality flags show the content of the natural language data. Users practicing Swedish interact with Emely through a web UI (A). The user speaks to Emely and the spoken language is transformed to text (B). The text passes through a Toxic Filter (C) consisting of rules that identify hate speech and toxic language. If the user expresses unacceptable sentences, Emely will shortcut the information flow in Fig. [2](#S3.F2) already at this point, i.e., reject the input, and simply reply “Can we please talk about something else?” If not rejected, the Swedish text is subsequently translated to American English (D).

![High-level Emely architecture. Gray boxes indicate third-party components.](https://media.arxiv-vanity.com/render-output/7809767/architecture.png)

Figure 2. High-level Emely architecture. Gray boxes indicate third-party components.

The English text is then processed by a custom dialog manager module (E) that relies on ML classifiers trained specifically for processing the user input to Emely. These ML models have been trained using the open-source framework Rasa (<https://rasa.com/>). The dialog manager controls the ongoing dialog and, among other things, can bypass the GDM to instead provide high-quality answers to certain input, such as answering the question “What is the salary for this position?” with “I cannot discuss the salary with you at this stage of the process.” The rightmost component in Fig. [2](#S3.F2)
is the GDM (F), i.e., the component that constitutes the software under test. The GDM is based on publicly available pre-trained Blenderbot models by Facebook AI (<https://huggingface.co/facebook>). Blenderbot is an open-source open-domain ConvAg with pre-trained GDMs in different sizes for conversations in English. NordAxon and the city of Helsingborg manually created an initial realistic dialog dataset of interview sessions, expanded it with transcriptions of mock interviews from YouTube, and expanded the content using GPT-3 (Brown et al., [2020](#bib.bib13)). From this starting point, the dataset has gradually grown during the Emely development. NordAxon uses transfer learning to adapt generic Blenderbot models to custom GDMs for Emely. The dialog manager forwards input from the user to Emely’s custom GDM and a reply in English is provided.

The lower part of Fig. [2](#S3.F2) shows how information from the GDM is processed on the way back to the user. First, also considered part of the conversational engine, rule-based processing (G) provides a filter for the GDM output to ensure that no unacceptable content is forwarded to the user. These rules primarily rely on pattern matching and string processing, guided by hours’ worth of system testing. The natural language text is then translated from American English to Swedish (H) and a third-party text to speech component (I) generates spoken language in the local Scanian dialect – hence, the Scanian flag. Finally, Emely reads the output aloud complemented by the corresponding text. An important quality requirement is that the overall process must be fast enough to result in a realistic interview session without unreasonable delays.

4. Research Method
-------------------

We are currently conducting action research, i.e., “a disciplined process of inquiry conducted by and for those taking the action” (Sagor, [2000](#bib.bib38)). Action research is an appropriate research method in software engineering when the main goal is to support change in a development context and the researcher is a member of the team responsible for the change (Wohlin and Runeson, [2021](#bib.bib46)). We describe our work according to Staron’s recently published model for action research, which explicitly depicts knowledge, learning, and theory as secondary outputs from the endeavor – such as this paper. Fig. [3](#S4.F3) shows an overview of the research project, organized into two phases. Phase I describes the action research cycles and Phase II illustrates how we worked on systematic evaluation of the research output and generalization beyond Emely. To support the flow of the paper, we present details related to the method for activities A)–C) in Sec. [4.3](#S4.SS3).

### 4.1. Phase I: Action Research Cycles

The upper part in Fig. [3](#S4.F3)
illustrates the action research cycle and its five elements: 1) Diagnosing, 2) Action Planning, 3) Action Taking, 4) Evaluating, and 5) Learning. Full cycles were iterated on a weekly basis during three months of 2021. The action research during this time involved three distinct steps, i.e., A) Requirements Elicitation, B) Test Design, and C) Test Implementation. As indicated in Fig. [3](#S4.F3), a literature study was a continuous activity during the work, including gray literature on ConvAgs and chatbot testing.

![Overview of the action research study.](https://media.arxiv-vanity.com/render-output/7809767/overview.png)

Figure 3. Overview of the action research study.

While there has been staff turnover during the project, the core action team consisted of nine members: two researchers from RISE, two researchers from Lund University, four AI developers from NordAxon, and a product owner from the city of Helsingborg. As this project was conducted during the Covid-19 pandemic, remote work was the norm. The researchers integrated with the development team by joining NordAxon’s regular virtual meetings, i.e., two regular status meetings per week covering synchronization and sprint planning. On top of that, additional meetings were organized for discussion of technical details and evaluation of results. Between the live meetings, a shared Slack workspace ensured another effective communication channel. The communication channels, combined with shared source code and ML models, provided a project context geared for rapid feedback.

The iterative work enabled the evolution of three deliverables (cf. the gray Research Artifacts in Fig. [3](#S4.F3)) customized for Emely: a Requirements Specification, a Test Suite for a selection of the requirements, and the corresponding Test Results. After three months of action research cycles, the resulting automated test cases could be executed in the Emely development environment. The action team interwove the five elements of action research with the ongoing evolution of Emely corresponding to versions 0.2, 0.3, and 0.4:

1. Diagnosing initiated weekly sprints with an exploration of the current state of the QA initiative, e.g., to what extent were the elicited requirements complete and did the implemented test cases create value?
2. Action Planning was integrated as a natural part of the sprint planning in the development project. In practice, we jointly prioritized backlog items and assigned resources for RE and test development.
3. Action Taking: The interventions we did to support QA of GDMs in the organization involve RE (elicitation, analysis, specification, and validation) and test development (design and implementation).
4. Evaluating: We continuously assessed the interventions from a technical perspective with the NordAxon developers. Moreover, we also validated requirements and test verdicts with the product owner on a weekly basis.
5.
Learning During the cycles, the researchers in the team used retrospectives to discuss what lessons learned could be generalized to ConvAg QA beyond Emely. In this process, we collected notes that later initiated the writing of this paper during Phase II. ### 4.2. Phase II: Generalizing Knowledge The lower part of Fig. [3](#S4.F3 "Figure 3 ‣ 4.1. Phase I: Action Research Cycles ‣ 4. Research Method ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows Phase II of the project that started from the two research artifacts developed during Phase I. In Phase II, the research project shifted its focus to careful analysis of the test suite, validation of the test results, generalizing results, and reporting. During Phase II, the research team resorted to joining one weekly status meeting to discuss the progress and to ask clarification questions for the paper writing. Phase II lasted for five months and allowed a longitudinal perspective on ConvAg QA. The core activities during Phase II related to executing the test suite for different GDMs and to validate the results. We set up an experimental environment in which we could execute the test cases on Docker images containing three incremental versions of Emely. To allow comparisons with an open-source ConvAg, we also provide test results for Blenderbot-400M-distill444<https://huggingface.co/facebook/blenderbot-400M-distill> as a benchmark. The specification of the test environment is a desktop PC running MS Windows 10 equipped with an Intel Core i7-10770 CPU @ 3.80 GHz, 16 GB RAM, and an Nvidia 1080Ti graphics card. We provide a replication package for Blenderbot in the GitHub repository (rep, [2022](#bib.bib2)). Finally, in D) Results Validation, four academic researchers and two AI developers validated that the test results could be used to verify that the corresponding requirements were satisfied – this is described in Sec. [5.3](#S5.SS3 "5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"). ### 4.3. Requirements Engineering and Testing An initial activity toward QA of a GDM is to clarify expectations. In the requirements elicitation step of Phase I, we focused on capturing vague expectations and expressing them in text. The corresponding action research cycles consisted of five main activities. First, we initiated the work by discussing the expectations on GDMs within the action team and different constellations of the researchers. Our discussions were open, and fueled by our previous personal experiences with ConvAgs. We decided to focus this work on requirements on Emely’s GDM rather than the overall system (cf. F) in Fig. [2](#S3.F2 "Figure 2 ‣ 3. Case description: Emely ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice")). Second, we did an informal market analysis and surveyed gray literature (Garousi et al., [2020](#bib.bib18)) on ConvAg and chatbot quality, e.g., blog posts and technical reports. Third, we iteratively elicited, analyzed, and specified requirements. This activity was the largest in terms of effort. Fourth, we organized the requirements based on the categories proposed by Martin et al. (Martín et al., [2021](#bib.bib28)), i.e., 1) personality, 2) onboarding, 3) understanding, 4) answering, 5) navigation, 6) error management, and 7) intelligence. 
Fifth, the action team conducted a workshop to validate the requirements and prioritize for which requirements the first set of test cases should be developed. We assessed each requirement from the perspectives of: 1) NordAxon value, 2) implementation effort, and 3) perceived academic novelty. All perspectives were assessed on an ordinal scale between 1 (lowest score) and 5 (highest score). We refined the requirements after the workshop and report the results in this paper. We iteratively designed test cases (cf. B) in Fig. [3](#S4.F3 "Figure 3 ‣ 4.1. Phase I: Action Research Cycles ‣ 4. Research Method ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice")) targeting the prioritized requirements as part of the action research cycles. Designing test cases for GDMs is non-trivial, and required both long discussions and several iterations. Moreover, the test implementation (cf. C) in Fig. [3](#S4.F3 "Figure 3 ‣ 4.1. Phase I: Action Research Cycles ‣ 4. Research Method ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice")) was intertwined with the test design. Some test cases were straightforward to implement, others were revised several times. The results are presented in Sec. [5](#S5 "5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"). 5. Results and Discussion -------------------------- This section reports our results and discusses the three RQs. ### 5.1. RQ1: Requirements The action team developed a set of 38 requirements for Emely’s GDM. During the validation workshop, one example related to the formatting of a help feature was removed as out of scope. The remaining 37 requirements, all considered valid, were further refined after the meeting and the outcome is listed in Table [1](#S5.T1 "Table 1 ‣ 5.1. RQ1: Requirements ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"). The letters in the identifiers show how the requirements map to the categories proposed by Martin et al. (Martín et al., [2021](#bib.bib28)), i.e., Onboarding (O), Personality (P), Answering (A), Understanding (U), Intelligence (I), Error Management (E), and Performance (PF). Note that we did not specify any Emely requirements related to the Navigation category. In the context of the requirements, we define a set of key terms. Dialog: An instance of an interview session. Prompt: A single string of user input to the GDM. Reply: A single string of output from the GDM. Digressed dialog: A dialog that needs an intervention to refocus because it has left the main topic. ![](https://media.arxiv-vanity.com/render-output/7809767/table_reqts.png) Table 1. Requirements on Emely’s dialog generating model. Scores are in the range 1–5, with 5 being the highest. Table [1](#S5.T1 "Table 1 ‣ 5.1. RQ1: Requirements ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") also presents how the action team prioritized the requirements before initiating the test development. As described in Sec. [4](#S4 "4. 
Research Method ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"), we assessed all requirements from three dimensions resulting in scores between 1 and 5. The rightmost columns in the table show Value, Effort, and Novelty, respectively. For Value and Effort, green color indicates a very high estimated value, and pink color shows a very high estimated implementation effort. Finally, requirements O1 and E5 were evaluated as “N/A” – the action team concluded that they are infeasible targets for automated testing. Based on discussions within the action team, we decided to proceed with test design for 15 requirements as highlighted in bold font in Table [1](#S5.T1 "Table 1 ‣ 5.1. RQ1: Requirements ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"). Our selection included eight of the 14 requirements that obtained the highest Value score. Three of the top Value requirements were excluded due to high Effort scores (P1, A2, and U1), whereas we decided to include I2 despite its high Effort. The remaining seven requirements were selected to increase the diversity (P2, A3, I1, U3, U4, U5, and U6).

### 5.2. RQ2: Test Design

Our approach to GDM testing relies on randomness in the test case generation. To allow the detection of general trends, the number of test case executions is on the order of hundreds or thousands. The approach corresponds to soak testing as defined in ISO/IEC/IEEE 29119 (ISO/IEC/IEEE, [2013](#bib.bib22)). Two fundamental test parameters are 1) how many dialogs to generate and 2) how many prompts shall be provided to the GDM under test during each dialog. We found that between 20 and 50 prompts per dialog works well and that a number of dialogs resulting in roughly 1,000 prompts in total reliably reveals differences between GDMs. For each of the selected requirements in Table [1](#S5.T1 "Table 1 ‣ 5.1. RQ1: Requirements ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"), the corresponding approach to testing followed one of five test structures:

* Question-Answer (Q-A)
* N-gram Counting (NC)
* Coherence Checks (CC)
* Toxicity Analysis (TA)
* Simple Checks (SC)

The Q-A structure has three main test steps: 1) provide specific information, 2) request the same information, 3) assert that the two pieces of information match. We use this for both open questions and closed yes/no questions. Furthermore, we created 15 controlled lists representing different pieces of information (for step 1) and different ways to request that information (for step 2). For open questions, we use a BERT-SQuAD model (Devlin et al., [2018](#bib.bib16); Rajpurkar et al., [2016](#bib.bib34)) to identify what part of the reply from the GDM is the answer – this is used for the assessment in step 3). Wang et al. use a similar QA approach in their work on evaluating text summaries (Wang et al., [2020](#bib.bib44)). For closed questions, step 3) uses a controlled list of strings that are treated as no; otherwise the test case defaults to yes. The other four test structures are less complex. However, all but SC also use third-party libraries. Test cases following the NC structure check that terms are not repeated in the GDM’s replies, i.e., “stuttering,” using NLTK (Bird et al., [2009](#bib.bib7)).
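As an illustration of the NC structure, the sketch below counts repeated N-grams (bigrams up to 6-grams) in a single reply and combines them into one stuttering score. The tokenization, the linear weighting of higher-order N-grams, and the function name are simplifying assumptions made for this example; the scoring actually used in the Emely test suite is available in the replication package (rep, [2022](#bib.bib2)).

```python
# Minimal sketch of an NC-style "stuttering" check (illustrative weighting scheme).
import re
from collections import Counter
from nltk.util import ngrams


def stuttering_score(reply: str, max_n: int = 6) -> float:
    """Score repeated N-grams in one GDM reply; higher-order repetitions count more."""
    tokens = re.findall(r"\w+", reply.lower())  # the real suite may tokenize differently
    score = 0.0
    for n in range(2, max_n + 1):
        counts = Counter(ngrams(tokens, n))
        repeats = sum(c - 1 for c in counts.values() if c > 1)
        score += n * repeats  # penalize higher-order N-grams more (assumed weights)
    return score


print(stuttering_score("It is always sunny and always sunny."))    # > 0
print(stuttering_score("I work as a nurse at a local hospital."))  # 0.0
```

Aggregated over all replies in a dialog, a non-zero score flags the dialog, which is how the A4 results below are reported.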
Test cases of the CC structure uses a Sentence-BERT model (Reimers et al., [2019](#bib.bib35)) to evaluate the coherence of the content during dialogs and between subsequent replies. The TA structure uses the Detoxify model (Hanu and Unitary team, [2020](#bib.bib19)) to assess whether the language in the replies from a GDM is acceptable. Finally, the SC structure relies on trivial Python asserts. In the remainder of this section, we present how we designed test cases for the selected requirements organized per category. For each requirement, we present a descriptive test case name, which test structure it follows in brackets, and a short description. #### 5.2.1. Personality and Answering We designed three test cases to verify that the requirements related to personality and answering are satisfied. Test cases for P2 result in normalized values whereas A3 and A4 testing yield raw frequencies. * Toxicity [TA]. Send prompts to the GDM and assess the replies using the Detoxify model. Detoxify returns normalized values between 0 and 1 corresponding to the following categories (Hanu and Unitary team, [2020](#bib.bib19)): 1) Toxicity, 2) Severe toxicity, 3) Obscene, 4) Threat, 5) Insult, 6) Identity Attack, and 7) Sexually explicit. All seven values are used to verify the satisfaction of P2. * Nagging [SC] A simple check to count the number of times during a dialog that the GDM provides the same verbatim question that it has already provided during a dialog. * Stuttering [NC] Count the number of immediate N-gram repetitions in the GDM’s replies. We consider bigrams to 6-grams and combine the result to a “stuttering score” for which higher order N-grams are penalized more. #### 5.2.2. Intelligence Below we list the eight test case designs we developed to verify that the selected Intelligence requirements are met. Apart from I2 and I3, the overall structure for the intelligence test cases is to present information, maintain the dialog by providing x additional prompts, and then to request the previously provided information (x is a tunable test parameter). Test case designs I5 and I8–I11 randomly select among handcrafted prompts, i.e., Controlled Test Data from the file testset\_database.py on GitHub (rep, [2022](#bib.bib2)). * Self consistency [Q-A] 1) Request information about Emely. 2) Provide x prompts. 3) Request the same information about Emely again. 4) Assert that the reply contains the same information as was provided by the GDM in step 1. * Dialog coherency [CC] 1) Provide x prompts. 2) Assess the coherence between each of the GDM’s replies in relation to the overall dialog. * Reply coherency [CC] 1) Provide x prompts. 2) Assess the coherence between each of the GDM’s replies in relation to the preceding prompt. * Memory assessment [Q-A] 1) Provide specific information a in a prompt (e.g., “I studied at Shiraz University”). 2) Provide x prompts. 3) Provide a prompt requesting information a. 4) Assert that the GDM’s reply contains a. * Diverse information [Q-A] 1) Provide information a as formulated in one out of several handcrafted prompts. 2) Provide a prompt requesting information a. 3) Assert that the GDM’s reply contains a. * Contextual information [Q-A] 1) Provide one out of several handcrafted prompts containing two sentences in which the first gives context and the second adds specific information a. 2) Request information a for the specific context. 3) Assert that the GDM’s reply contains a. * Diverse questions [Q-A] 1) Provide specific information a in a prompt. 
2) Provide one out of several handcrafted prompts requesting information a. 3) Assert that the GDM’s reply contains a. * Contextual questions [Q-A] 1) Provide specific information a in a prompt. 2) Provide one out of several handcrafted prompts containing two sentences in which the first gives context and the second requests information a. 3) Assert that the GDM’s reply contains a. #### 5.2.3. Understanding The following list reports the four test case designs we used to verify that the selected Understanding requirements are satisfied. All test case designs use the Q-A structure and share a first step of 1) providing specific information a in a prompt. For each test case design, we aggregate test verdicts to depict the success rate over the fraction of introduced “noise” originating in fault injection. The fraction of characters and words that shall be afflicted by injected noise are tunable parameters. * Typo robustness [Q-A] 2) Request information a through a prompt containing randomly inserted character-level typing mistakes. 3) Assert that the GDM’s reply contains information a. * Word order robustness [Q-A] 2) Request information a through a prompt containing randomly swapped words. 3) Assert that the GDM’s reply contains information a. * Omitted word robustness [Q-A] 2) Request information a through a prompt containing randomly dropped words. 3) Assert that the GDM’s reply contains information a. * Synonymy robustness [Q-A]. 2) Request information a through a prompt containing words randomly replaced by synonyms (using NLTK (Bird et al., [2009](#bib.bib7))). 3) Assert that the GDM’s reply contains information a. ### 5.3. RQ3: Test Implementation Fig. [4](#S5.F4 "Figure 4 ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows the dual GDM structure used in our test implementation. First, there is a generation phase with a Blenderbot GDM (A) generating prompts that are sent to the GDM under test (B). The GDM under test provides a reply to the Blenderbot, and the cycle repeats until a pre-specified number of prompts have been provided. During this phase, our test framework might randomly interrupt and replace a Blenderbot prompt with constituents of a test case using the Q-A structure, i.e., injecting prompts from the Controlled Test Data (C). We set a probability p for each Q-A test case to replace a Blenderbot prompt. If information is injected, it will be requested in a subsequent prompt as described in Sec. [5.2](#S5.SS2 "5.2. RQ2: Test Design ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"). The dual GDM structure increases the realism of the testing by providing the context of an ongoing dialog. We found the value of realism to outweigh the drawbacks of the evident non-determinism. ![The test implementation with dual GDMs.](https://media.arxiv-vanity.com/render-output/7809767/impl.png) Figure 4. The test implementation with dual GDMs. The right part of Fig. [4](#S5.F4 "Figure 4 ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows how the dialog resulting from the generation phase is forwarded to the Analysis step (D). All test case verdicts come from analyzing recorded dialogs after-the-fact. 
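To make the generation phase concrete, the following sketch shows one way to drive a recorded dialog with a driver GDM while injecting Q-A test prompts with probability p. The class and function names, the fixed gap before requesting injected information, and the opening line are illustrative assumptions; the actual harness is available in the replication package (rep, [2022](#bib.bib2)).

```python
import random
from dataclasses import dataclass


@dataclass
class QATestCase:
    """One Q-A injection: provide a fact, request it back later (hypothetical fields)."""
    inject_prompt: str    # e.g., "I studied at Shiraz University."
    request_prompt: str   # e.g., "Where did I study?"
    expected_answer: str  # e.g., "Shiraz University"


def generate_dialog(driver, gdm_under_test, test_cases, n_prompts=50, p=0.05, gap=3):
    """Generation phase of the dual GDM setup: the driver produces prompts, the GDM
    under test replies, and Q-A test prompts replace driver prompts with probability p.
    The recorded dialog is analyzed after-the-fact."""
    dialog, pending = [], []
    reply = "Hello! Tell me a little about yourself."  # assumed opening turn
    for turn in range(n_prompts):
        due = [(i, tc) for i, tc in pending if turn - i >= gap]
        if due:
            i, tc = due[0]
            pending.remove((i, tc))
            prompt = tc.request_prompt            # ask for previously injected information
        elif test_cases and random.random() < p:
            tc = random.choice(test_cases)
            pending.append((turn, tc))
            prompt = tc.inject_prompt             # replace the driver's prompt
        else:
            prompt = driver(reply)                # e.g., Blenderbot generating the next user turn
        reply = gdm_under_test(prompt)
        dialog.append((prompt, reply))
    return dialog
```

Here `driver` and `gdm_under_test` can be any callables mapping a string to a string, e.g., thin wrappers around the Blenderbot model and the GDM under test.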
Test cases following the structures NC, CC, TA, and SC can be executed on all dialogs regardless of whether Q-A test cases injected information. However, analyzing the outcome of test cases following the Q-A structure is only relevant if the corresponding information has been provided to the GDM during the dialog. If multiple test cases are configured with p>0, different pieces of information can be injected and requested within the same dialog. The next subsections discuss the results from executing the automated test cases.

#### 5.3.1. Personality and Answering

Table [2](#S5.T2 "Table 2 ‣ 5.3.1. Personality and Answering ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows the results from testing the Personality and Answering requirements, i.e., Toxicity, Nagging, and Stuttering. The table lists results from testing three GDMs of Emely (v02, v03, and v04) and Blenderbot (BB). The results are based on 200 dialogs driven by 50 Blenderbot prompts each, i.e., analyses of 10,000 GDM replies. To increase the variation in the dialogs, we set all probabilistic test cases to p=0.05.

Toxicity is a subjective measure. We argue that recall (i.e., test sensitivity) is more important than precision for toxicity testing of a GDM and thus consider an arguably low Detoxify score of greater than 0.1 as the threshold. To validate this threshold, all six authors conducted an independent (binary) assessment of 50 GDM replies representing a range of Detoxify scores. The task was to use their gut feeling to indicate replies that “do not feel like a tactful reply by the GDM.” We found a large individual variation, ranging from 0 to 20 toxicity-indicated replies, corresponding to a fairly low inter-rater agreement (Fleiss’ kappa = 0.25). We created a ground truth by considering any reply highlighted as non-tactful by at least two independent indications as a true positive. The test results, corresponding to toxicity testing using the 0.1 threshold on the sample, represent a recall of 0.88 and a precision of 0.28. The test results for P2 - Toxicity in Table [2](#S5.T2 "Table 2 ‣ 5.3.1. Personality and Answering ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") show i) the fraction of toxic replies (out of 10,000), ii) the 75% percentile for Detoxify scores, and iii) their standard deviation.

| | | v02 | v03 | v04 | BB |
| --- | --- | --- | --- | --- | --- |
| P2 - Toxicity | Toxic | 391/10k | 350/10k | 398/10k | 526/10k |
| | 75% | 0.002 | 0.003 | 0.003 | 0.003 |
| | Std | 0.083 | 0.078 | 0.077 | 0.070 |
| A3 - Nagging | Nag | 196/200 | 200/200 | 200/200 | 197/200 |
| | #Nags | 670 | 1176 | 1159 | 617 |
| | Median | 2 | 3 | 2 | 2 |
| A4 - Stuttering | Stut | 192/200 | 183/200 | 20/200 | 190/200 |
| | 75% | 1.077 | 0.143 | 0.299 | 0.133 |
| | Std | 28.83 | 0.025 | 0.032 | 0.022 |

Table 2. Test results related to Personality and Answering. Results in bold font are discussed in the text.

Table [2](#S5.T2 "Table 2 ‣ 5.3.1. Personality and Answering ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") also shows the test results for A3 - Nagging.
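For reference, the A3 nagging check itself can be as simple as the sketch below: it counts GDM questions that repeat, verbatim, a question already asked earlier in the same dialog. The naive sentence segmentation is an assumption made for this example; the actual check is part of the test suite in the replication package (rep, [2022](#bib.bib2)).

```python
import re
from collections import Counter


def count_nagging(gdm_replies):
    """SC-style check: number of verbatim question repetitions across one dialog."""
    seen, nags = Counter(), 0
    for reply in gdm_replies:
        # Naive sentence segmentation; the real suite may segment replies differently.
        sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", reply)]
        for question in (s for s in sentences if s.endswith("?")):
            if seen[question] > 0:
                nags += 1
            seen[question] += 1
    return nags


replies = ["I am fine. What do you do for a living?",
           "That sounds nice. What do you do for a living?"]
print(count_nagging(replies))  # 1
```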
Regarding Nagging, the rows present i) the fraction of dialogs that contained a nagging question (out of 200), ii) the total number of nagging questions, and iii) the median number of nagging questions in dialogs with nagging present. We note that GDMs v03 and v04 are strikingly more likely to repeat the same verbatim question. Finally, Table [2](#S5.T2 "Table 2 ‣ 5.3.1. Personality and Answering ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") reports the test results for A4 - Stuttering. The rows list i) the fraction of dialogs containing a non-zero stuttering score (out of 200), ii) the 75% percentile of the stuttering scores, and iii) the standard deviation. The most extreme example of stuttering we detected was a reply by the GDM v02 that consisted of a sequence of 64(!) “z”. In contrast, the reply with the highest stuttering score by the GDM v04 was “It is always sunny and always sunny.” To provide another perspective on how the requirement is met, Fig. [5](#S5.F5 "Figure 5 ‣ 5.3.1. Personality and Answering ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows distributions and probability density functions for stuttering scores representing 200 dialogs for each GDM. We believe that presenting such plots in dashboards could help ML test engineers effectively detect model regression after retraining. ![Distributions of stuttering scores (on the x-axes). Y-axes show frequencies (note the extreme scale of v02).](https://media.arxiv-vanity.com/render-output/7809767/dists.png) Figure 5. Distributions of stuttering scores (on the x-axes). Y-axes show frequencies (note the extreme scale of v02). We conclude that the test cases developed for the three selected requirements under Personality and Answering detected meaningful differences between the GDMs. Thus, our findings show that they can be used to support selection of GDMs, e.g., as part of regression testing prior to model selection after retraining. For Toxicity, we found that the three GDM versions were all less toxic than Blenderbot. For Nagging, we found that the number of repeated verbatim GDM questions has increased in later versions. Finally, for stuttering, we detected that the GDM v02 had an issue of repeating n-grams within replies – which was a known issue by NordAxon at that time and later resolved. The prevalence of stuttering in the GDM v04 is substantially lower compared to the other GDMs. #### 5.3.2. Intelligence, Understanding, and Robustness Table [3](#S5.T3 "Table 3 ‣ 5.3.2. Intelligence, Understanding, and Robustness ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") summarizes the results from testing the Intelligence requirements. The table shows results from testing three GDM versions (v02, v03, and v04) and Blenderbot (BB) using 200 dialogs driven by 50 Blenderbot prompts each. Table [3](#S5.T3 "Table 3 ‣ 5.3.2. Intelligence, Understanding, and Robustness ‣ 5.3. RQ3: Test Implementation ‣ 5. 
Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows three values for each requirement: i) the fraction of dialogs with failed test cases, ii) the 75% percentile for the number of failed test cases per dialog, and iii) the standard deviation. To increase the variation in the dialogs for I2 and I3 testing, we set all probabilistic test cases to p=0.05. For all other test cases, we set p=0.05 for the corresponding injection and kept all others at 0.

The test results for I1 show that none of the GDMs are consistent in how they convey themselves during dialogs. Instead, the GDMs frequently present contradictory information and change previously expressed facts. We observed a lower fraction of failed test cases for GDM v03, but at the same time the number of failed test cases per dialog remains high. We consider this a normal variation in the probabilistic testing and conclude that all GDMs are far from satisfying I1.

Coherence is a subjective quality just like toxicity. To validate the utility of Sentence-BERT’s Next Sentence Prediction (NSP) for measuring coherence as part of the CC test structure, all authors did an independent assessment of 30 dialog snippets (containing roughly five prompts and five replies) representing a range of different I2 test verdicts. The task was to use their gut feeling to indicate replies that “do not feel coherent given the ongoing dialog.” The inter-rater agreement was higher than for toxicity (Fleiss’ kappa = 0.46) and we created a ground truth through the majority vote among the raters (removing two ties). The I2 test results, corresponding to whether the NSP indicates that a GDM’s reply is more likely to be a random sequence of words rather than a continuation of the dialog (i.e., coherent), represent a recall of 0.50 and a precision of 0.85. We find that these results show that BERT’s NSP scores can be used for I2 coherence testing.

Table [3](#S5.T3 "Table 3 ‣ 5.3.2. Intelligence, Understanding, and Robustness ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") shows the results from I2 testing. For each of the 10,000 replies in the 200 dialogs, NSP assessed whether it was coherent. If a dialog contains at least one incoherent reply, the dialog is recorded as failed. We find that the GDMs v03 and v04 are more coherent than v02, i.e., 79 and 71 failed dialogs compared to 122. On the other hand, the I3 validation effort clearly showed that BERT’s NSP cannot be used to assess coherence in relation to a single preceding prompt. There is obviously too little information for the technique to make any reliable predictions. Thus, we do not report any results from testing I3 – Table [3](#S5.T3 "Table 3 ‣ 5.3.2. Intelligence, Understanding, and Robustness ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice") only lists “Omitted.”
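As an illustration of the CC structure behind the I2 verdicts, the sketch below scores whether a reply reads as a continuation of the preceding dialog using a BERT next-sentence-prediction head from Hugging Face transformers. The checkpoint name and the 0.5 cut-off are assumptions made for this example; the test suite builds on the (Sentence-)BERT setup referenced above.

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Any BERT checkpoint with an NSP head works for the sketch (assumed choice).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased").eval()


def is_coherent(context: str, reply: str, threshold: float = 0.5) -> bool:
    """True if the reply looks like a continuation of the context (NSP label 0)
    rather than a random sequence of words."""
    inputs = tokenizer(context, reply, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    p_next = torch.softmax(logits, dim=-1)[0, 0].item()
    return p_next >= threshold


context = "What did you study? I studied nursing in Malmö."
print(is_coherent(context, "Nursing sounds like a rewarding field to work in."))  # likely True
print(is_coherent(context, "Purple keyboard seventeen banana elevator."))         # likely False
```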
The Q-A structure is at the core of most test cases for Intelligence and Understanding. As described in Sec. [5.2](#S5.SS2 "5.2. RQ2: Test Design ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"), the I5 test cases provide information to the GDM and request the same information a few prompts later. Test cases for I8–I11 use the same concept, but alter the way either information is provided or information is requested. The test cases we implemented for Understanding use the same Q-A structure, but inject various types of noise, i.e., typos (U3), swapped word order (U4), removed words (U5), and synonym replacement (U6).

Test results related to I5 also had to be carefully validated. While the results related to the memory of GDMs are less subjective than the toxicity and coherence counterparts, the possible language variation makes it difficult to programmatically distinguish between correct and incorrect replies – the so-called test oracle problem (Barr et al., [2014](#bib.bib6)). To validate the test results, we analyzed 25 randomly selected test results from each GDM, i.e., the output from BERT-SQuAD and the test case verdict. Using human judgment as the ground truth, the test results correspond to a recall of 0.85 and a precision of 0.97. We further analyzed the relatively high false negative rate. Eleven passing test cases should instead have failed. Among these, we found two main tendencies. First, the GDMs sometimes “trick” the test case into a positive verdict by talking about themselves, e.g., “Where did I use to work?” followed by “I used to work at a fast food restaurant.” Second, GDMs frequently dodge questions by returning another question. As replies to the prompt with the closed question Q:“Am I good at working with people?” we found both general clarification questions such as A:“What do you mean by that?” and more complex formulations, e.g., A:“What do you mean by people skills?” Defaulting such answers to yes, as described in Sec. [5.2](#S5.SS2 "5.2. RQ2: Test Design ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"), resulted in false negatives.

The recall is still sufficient to allow GDM testing, thus we present I5 test results in Table [3](#S5.T3 "Table 3 ‣ 5.3.2. Intelligence, Understanding, and Robustness ‣ 5.3. RQ3: Test Implementation ‣ 5. Results and Discussion ‣ Quality Assurance of Generative Dialog Models in an Evolving Conversational Agent Used for Swedish Language Practice"). The results, corresponding to I5 test case injection set to p=0.1, clearly show that all GDMs display inadequate memory, i.e., between 173 and 199 dialogs with injected I5 test cases fail. Blenderbot performs the worst, and we observe only slight differences between Emely’s three GDM versions. We conclude that the test cases designed for I1, I2, and I5 are sufficiently sensitive to detect differences between different versions of GDMs. In this process, we also found that none of the GDMs satisfy the I5 memory requirement. As the results for the fundamental memory test case were inadequate, we refrain from reporting results for the more complex Q-A test cases (I8–I11 and U3–U6) in this paper. The basic I5 test case design was troublesome enough.
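To make the Q-A verdict step concrete, the sketch below extracts the answer span from a GDM reply with a SQuAD-style model and compares it with the injected fact, and it also shows the default-to-yes rule for closed questions that caused the false negatives discussed above. The pipeline checkpoint, the containment-based matching rule, and the list of "no" markers are illustrative assumptions rather than the project's actual implementation.

```python
from transformers import pipeline

# Any SQuAD-style checkpoint serves the sketch; the project uses its own BERT-SQuAD model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

NO_MARKERS = ("no", "not", "never")  # illustrative stand-in for the controlled list of "no" strings


def open_question_verdict(question: str, gdm_reply: str, expected: str) -> bool:
    """Open questions: extract the answer span from the reply and compare it with
    the information injected earlier in the dialog (simple containment match)."""
    answer = qa(question=question, context=gdm_reply)["answer"].lower()
    return expected.lower() in answer or answer in expected.lower()


def closed_question_verdict(gdm_reply: str) -> str:
    """Closed yes/no questions: anything not matching the 'no' list defaults to yes,
    which is why dodging counter-questions end up as false negatives."""
    words = gdm_reply.lower().split()
    return "no" if any(marker in words for marker in NO_MARKERS) else "yes"


print(open_question_verdict("Where did I study?",
                            "You told me that you studied at Shiraz University.",
                            "Shiraz University"))
print(closed_question_verdict("What do you mean by people skills?"))  # "yes" by default
```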
| | | v02 | v03 | v04 | BB |
| --- | --- | --- | --- | --- | --- |
| I1 - Self consistency | Failed | 182/197 | 136/196 | 180/198 | 161/194 |
| | 75% | 3 | 5 | 5 | 5 |
| | Std | 2.606 | 3.210 | 2.668 | 2.848 |
| I2 - Dialog coherence | Failed | 122/200 | 79/200 | 71/200 | 114/200 |
| | 75% | 2 | 2 | 1 | 2 |
| | Std | 0.933 | 0.641 | 0.499 | 0.620 |
| I3 - Reply coherence | Omitted | | | | |
| I5 - Memory | Failed | 179/200 | 181/200 | 173/200 | 199/200 |
| | 75% | 2 | 3 | 2 | 3 |
| | Std | 1.462 | 1.772 | 1.378 | 1.533 |
| I8 – I11 (Diversity and Context) | Omitted | | | | |
| U3 – U6 (Robustness) | Omitted | | | | |

Table 3. Test results related to Intelligence. Results in bold font are discussed in the text. Several results, e.g., everything about Understanding, are omitted after failed validation.

6. Threats to Validity
-----------------------

We discuss the most important threats to the validity of our action research study as proposed by Staron (Staron, [2020](#bib.bib42)).

Construct validity. The concept of quality of ConvAgs is at the heart of our work. Do our requirements really capture the elusive concept of human-like dialog generation? Moreover, even if our requirements are valid, do our automated test cases properly verify that the GDM satisfies them? Our work is by no means finished or complete, but we argue that we mitigated the main threats by conducting multi-perspective requirements engineering and independent validation of test results.

Internal validity. The only difference between the test subjects is the GDMs, i.e., no rule-based processing differs. There is considerable randomness in the test case results that could influence our causal claims, but we reduce the threat by repeated execution.

Conclusion validity. Apart from P2 and A3, we largely discuss the fraction of dialogs that contain failed test cases. Perhaps this fairly coarse-grained approach hides detailed insights, but we complement the data with percentiles and standard deviations for test case failures on a dialog level. Furthermore, we find that interpreting inter-rater agreements using standard thresholds is valid for our purposes.

External validity. We designed test cases particularly for the GDM in the interview-coaching Emely, thus they are certainly less applicable to other GDMs. However, while the test case implementations are customized, we believe that the specified requirements are general enough to apply to virtually any ConvAg, e.g., the general-purpose Blenderbot. Generalizing further, many requirements should also apply to simpler chatbots without GDMs – rule-based bots shall also remember information and avoid toxic language.

7. Lessons Learned
-------------------

Action research results in valuable lessons learned. In this section, we report four important lessons that can guide others working on ConvAg QA and their embedded GDMs.

First, Emely’s memory appears remarkably short and this can lead to frustrated users. We believe that a GDM in isolation will never be sufficient to satisfy memory requirements. Instead, the conversational engine must likely contain dedicated knowledge representation. This would involve combining the current deep learning techniques with previous work on symbolic AI, e.g., what can be found in ConvAgs from the 90s (Allen et al., [1995](#bib.bib4)). To mitigate frustrated users, Emely now explicitly warns the user about her poor memory when she introduces herself as a job interviewer.
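A minimal sketch of the kind of dedicated knowledge representation referred to above could be a user-fact store maintained by the conversational engine next to the GDM, populated with simple patterns and consulted when memory-related questions arrive. The slot names and patterns are hypothetical; a production solution would need far more robust information extraction.

```python
import re


class UserFactStore:
    """Toy user-fact store kept beside the GDM (hypothetical slots and patterns)."""

    PATTERNS = {
        "education": re.compile(r"\bI studied (?:at )?(?P<value>[^.,!?]+)", re.IGNORECASE),
        "workplace": re.compile(r"\bI work(?:ed)? (?:at|for) (?P<value>[^.,!?]+)", re.IGNORECASE),
    }

    def __init__(self):
        self.facts = {}

    def observe(self, user_prompt: str) -> None:
        """Record facts stated by the user during the interview."""
        for slot, pattern in self.PATTERNS.items():
            match = pattern.search(user_prompt)
            if match:
                self.facts[slot] = match.group("value").strip()

    def lookup(self, slot: str):
        return self.facts.get(slot)

    def consistent_with(self, slot: str, gdm_reply: str) -> bool:
        """Check a GDM reply against the stored fact (cf. the memory and self-consistency tests)."""
        fact = self.facts.get(slot)
        return fact is None or fact.lower() in gdm_reply.lower()


store = UserFactStore()
store.observe("I studied at Shiraz University.")
print(store.lookup("education"))                                                       # Shiraz University
print(store.consistent_with("education", "You studied at Shiraz University, right?"))  # True
```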
Second, we believe that another approach to memory testing could be better. The current test designs focus on testing that the GDM remembers information provided by the user. Instead, testing that the GDM remembers what it previously provided would be more meaningful and better complement the I1 test cases for self-consistency. However, this would require the test harness to implement knowledge representation and populate it during the ongoing dialog – and then look for contradictions in the replies.

Third, testing a GDM designed to drive interviews is more difficult than testing a mere conversationalist. As a test engineer, you want to control the situation. However, the interview-driving Emely is itself designed to drive the conversational flow. Any test cases injected by the test engineer lead to a power struggle that makes the dialog less realistic.

Fourth, the GDM lacks the concept of self. Since Emely is an anthropomorphic (although disembodied) ConvAg, dialogs that violate everyday constructs such as “you” and “me” might deteriorate the user’s perception of Emely (Araujo, [2018](#bib.bib5)). Our test cases triggered such behavior, e.g., Q:“Do I enjoy working with people?” followed by the reply A:“Yes, I love working with people!” Future versions of the ConvAg need to be reinforced to mitigate such tendencies.

8. Conclusion and Future Work
------------------------------

QA of ConvAgs that rely on GDMs requires novel approaches to AI engineering. We conducted action research in the context of Emely, an evolving solution for person-centered interactive second-language acquisition (Larsen-Freeman, [2018](#bib.bib25)), intended to accelerate migrants’ entry into the labor market (Blake et al., [2017](#bib.bib9); Meer et al., [2019](#bib.bib29)). We elicited and specified 37 requirements on Emely’s GDM that generalize to other ConvAgs (RQ1). Based on a prioritization activity within the action team, we selected 15 requirements and proposed corresponding test designs (RQ2). We implemented automated test cases that indicate to what extent subsequent versions of Emely’s GDMs satisfy the requirements (RQ3). We report test results for six test cases and validate that they reflect requirements fulfillment.

This action research endeavor will continue. Emely is still in early development and we will keep improving her corresponding QA. Next, we will revisit and refine the requirements by introducing additional stakeholders into the requirements engineering. Furthermore, we will refactor the test infrastructure and execute test cases in an MLOps setting (Borg, [2022](#bib.bib11)). Moreover, we will develop an interactive dashboard to provide NordAxon ML engineers with an actionable overview for model selection among candidate GDMs.
754a01e8-e91b-4cdc-a549-d1f3ba8a90ca
trentmkelly/LessWrong-43k
LessWrong
The Mistake Script Here on Less Wrong, we have hopefully developed our ability to spot mistaken arguments. Suppose you're reading an article and you encounter a fallacy. What do you do? Consider the following script: 1. Reread the argument to determine whether it's really an error. (If not, resume reading.) 2. Verify that the error is relevant to the point of the article. (If not, resume reading.) 3. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't. This script seems intuitively correct, and many people follow a close approximation of it. However, following this script is very bad, because the judgement in step (3) is tainted: you are more likely to continue reading the article if you agree with its conclusion than if you don't. If you disagreed with the article, then you were also more likely to have spotted the mistake in the first place. These two biases can cause you to unknowingly avoid reading anything you disagree with, which makes you strongly resist changing your beliefs. Long articles almost always include some bad arguments, even when their conclusion is correct. We can greatly improve this script with an explicit countermeasure: 1. Reread the argument to determine whether it's really an error. (If not, resume reading.) 2. Verify that the error is relevant to the point of the article. (If not, resume reading.) 3. Decide whether you agree with the article's conclusion. If you are sure you do, stop reading. If you aren't sure what the conclusion is or aren't sure you agree with it, continue. 4. Decide whether the remainder of the article is worth reading despite the error. Resume reading or don't. This extra step protects us from confirmation bias and the "echo chamber" effect. We might try adding more steps, to reduce bias even further: 1. Reread the argument to determine whether it's really an error. (If not, resume reading.) 2. Verify that the error is relevant to the point of the article. (If not, r
a70acc8e-5a4b-460d-b037-64c85a5fe7bd
trentmkelly/LessWrong-43k
LessWrong
Suggest alternate names for the "Singularity Institute" Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.) Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity is Near in 2005, most people know "the singularity" as "accelerating change." How often do we miss out on connecting to smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the world "singularity" to talk about AI risk because the people he knows thinks the "accelerating change" singularity is "a bit mental."  LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it. Here are some possible alternatives, without trying too hard:   * The Center for AI Safety * The I.J. Good Institute * Beneficial Architectures Research * A.I. Impacts Research   We almost certainly won't change our name within the next year,
9dea2161-0c43-479c-b8c6-2f8ebde37d39
trentmkelly/LessWrong-43k
LessWrong
How teams went about their research at AI Safety Camp edition 5 AI Safety Camp connects new collaborators worldwide to discuss and decide on a concrete research proposal, gear up online as a team, and try their hand at AI safety research* during intensive coworking sprints. Six teams formed at our recent 5-month virtual camp. Below are their explanations. Each team has summarised their analysis and experiments, and presented their findings at our final online weekend together. Some published a paper or post since. Most are continuing work, so expect a few more detailed and refined write-ups down the line. Modularity Loss Function Team members:  Logan Smith, Viktor Rehnberg, Vlado Baca, Philip Blagoveschensky, Viktor Petukhov External collaborators:  Gurkenglas Making neural networks (NNs) more modular may improve their interpretability. If we cluster neurons or weights together according to their different functions, we can analyze each cluster individually. Once we better understand the clusters that make up a NN, we can better understand the whole.  To that end, we experimented with pairwise distances according to the neuron’s jacobian correlation, coactivations, and estimated mutual information. These metrics can be plugged into spectral clustering algorithms to optimize for modules in the network; however, having a modular NN does not equate to a more interpretable one. We investigated task-based masking methods to test for modularity as well as neuron group activation (via Google Dream) in order to test for these modules being more interpretable than an equivalent amount of neurons. We ran out of time before fitting all the pieces together, but are intending on working on it more over the summer. Presentation on final weekend (slides) ---------------------------------------- Cooperativity & Common Pool Resources  Team members:  Quinn Doughtery, Ben Greenberg, Ariel Kwiatkowski In environments with common pool resources, a typical failure mode is the tragedy of the commons, wherein agents exploit the scarce public
cfe4e68f-7e90-480d-a7f3-f03e1d89bc76
StampyAI/alignment-research-dataset/arbital
Arbital
Math 1 A reader at the **Math 1** level has enough mathematical ability to encompass "good at math" in a colloquial sense. They know at least some basic algebra, and how to apply algebraic thinking to problems in some contexts. If you threw a simple to moderate math puzzle at them, they could probably figure out what they were supposed to do without balking and saying "nope, I don't know how to do this". ## Writing for a Math 1 audience At this level, you can start to use letters to represent numbers as variables and manipulate them directly.
1566cd82-6458-4088-94b8-5ed9b4d59197
trentmkelly/LessWrong-43k
LessWrong
Critiques of the Agent Foundations agenda? What are some substantial critiques of the agent foundations research agenda? Where by agent foundations I am referrring the area of research referred to by Critch in this post, which I understand as developing concepts and theoretical solutions for idealized problems related to AI safety such as logical induction.
6d857398-8965-49e4-9916-830ff900cc95
trentmkelly/LessWrong-43k
LessWrong
Self medicating for Schizophrenia with - cigarettes ?
b2bd5085-beda-4955-ac85-cfbc72e6e927
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Do Earths with slower economic growth have a better chance at FAI? I was raised as a good and proper child of the Enlightenment who grew up reading *The Incredible Bread Machine* and *A Step Farther Out,* taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants. And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress:  Economic growth = good. But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI.  So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total *serial*research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem.  Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces.  This means that UFAI parallelizes better than FAI.  UFAI also probably benefits from brute-force computing power more than FAI.  Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.  I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches. Roughly, it seems to me like higher economic growth *speeds up time* and this is not a good thing.  I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035. I have various cute ideas for things which could improve a country's economic growth.  The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people.  I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore.  However I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny.  
And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first).  Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly.  (An extremely understandable position which would typically be taken by good and virtuous people). Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah") but in this case I feel impelled to ask because my *mainline* visualization has the Great Stagnation being good news.  I certainly *wish* that economic growth would align with FAI because then my virtues would align and my optimal policies have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality. To head off some obvious types of bad reasoning in advance:  Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem *arguendo* is that UFAI parallelizes more easily. Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier.  But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI.  Similarly to the more mundane idea that increased economic growth will produce more geniuses some of whom can work on FAI; there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research.  If you concentrate on some single good effect on *blah* and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance. So I pose the question:  "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"?  So far as I can tell, my current mainline guesses imply, "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research". This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community. EDIT:  To be clear, the idea is not that trying to *deliberately slow* world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky.  The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth. EDIT 2:  Carl Shulman's opinion can be found on the [Facebook discussion here](https://www.facebook.com/yudkowsky/posts/10151665252179228).
e6ba9482-cf55-4075-ac4d-e280e365acd9
trentmkelly/LessWrong-43k
LessWrong
Visual Reference for Frontier Large Language Models Hopefully this can be a helpful visual reference for the development and features of frontier large language models in the last year-ish. We are always open to feedback on how the reference could be improved. FAQ: Q: Which models/companies are included? A: We include LLMs that are noteworthy in capabilities, price, or tech advancement. Q: What constitutes a new model versus an update to an existing model? A: We rely primarily on how the developers describe and market it. Q: Image inputs and outputs typically use non-LLM models that aren’t listed here, so why include them? A: It takes some technical knowledge to even realize the distinction, and multimodality and proprietary secrecy will probably increase going forward. Suffice it to say these are features that users of the platform care about. Similar things can be said for the inclusion of realtime voice, document upload, Copilot, and web browsing tags.
f26f0510-a93e-4f0f-824a-40a47900e202
trentmkelly/LessWrong-43k
LessWrong
Reductionism sequence now available in audio format The sequence "Reductionism", which includes the subsequences "Joy in the Merely Real" and "Zombies", is now available as a professionally read podcast. Thanks to those who've been listening, let us know how your experience has been thus far and what you think of the service by dropping an email to support@castify.co.  
8c9e46e1-8266-4cf1-96cd-5ce24b1904ce
trentmkelly/LessWrong-43k
LessWrong
More on Twitter and Algorithms Previously: The Changing Face of Twitter Right after I came out with a bunch of speculations about Twitter and its algorithm, we got a whole bunch of concrete info detailing exactly how much of Twitter’s algorithms work. Thus, it makes sense to follow up and see what we have learned about Twitter since then. We no longer have to speculate about what might get rewarded. We can check. WE HAVE THE ALGORITHM We have better data now. Twitter ‘open sourced’ its algorithm – the quote marks are because we are missing some of the details necessary to recreate the whole algorithm. There is still a lot of useful information. You can find the announcement here and the GitHub depot here. Brandon Gorrell describes the algorithm at Pirate Wires. Here are the parts of the announcement I found most important. > The foundation of Twitter’s recommendations is a set of core models and features that extract latent information from Tweet, user, and engagement data. These models aim to answer important questions about the Twitter network, such as, “What is the probability you will interact with another user in the future?” or, “What are the communities on Twitter and what are trending Tweets within them?” Answering these questions accurately enables Twitter to deliver more relevant recommendations. > > The recommendation pipeline is made up of three main stages that consume these features:  > > 1. Fetch the best Tweets from different recommendation sources in a process called candidate sourcing. > 2. Rank each Tweet using a machine learning model. > 3. Apply heuristics and filters, such as filtering out Tweets from users you’ve blocked, NSFW content, and Tweets you’ve already seen. > > … > > Today, the For You timeline consists of 50% In-Network Tweets and 50% Out-of-Network Tweets on average, though this may vary from user to user. > > … > > The most important component in ranking In-Network Tweets is Real Graph. Real Graph is a model which predicts the likelihood of engag
37bca80e-31c2-4efa-83ee-5b013b7e36c5
trentmkelly/LessWrong-43k
LessWrong
SB 1047: Final Takes and Also AB 3211 This is the endgame. Very soon the session will end, and various bills either will or won’t head to Newsom’s desk. Some will then get signed and become law. Time is rapidly running out to have your voice impact that decision. Since my last weekly, we got a variety of people coming in to stand for or against the final version of SB 1047. There could still be more, but probably all the major players have spoken at this point. So here, today, I’m going to round up all that rhetoric, all those positions, in one place. After this, I plan to be much more stingy about talking about the whole thing, and only cover important new arguments or major news. I’m not going to get into the weeds arguing about the merits of SB 1047 – I stand by my analysis in the Guide to SB 1047, and the reasons I believe it is a good bill, sir. I do however look at the revised AB 3211. I was planning on letting that one go, but it turns out it has a key backer, and thus seems far more worthy of our attention. THE MEDIA I saw two major media positions taken, one pro and one anti. Neither worried itself about the details of the bill contents. The Los Angeles Times Editorial Board endorses SB 1047, since the Federal Government is not going to step up, and using an outside view and big picture analysis. I doubt they thought much about the bill’s implementation details. The Economist is opposed, in a quite bad editorial calling belief in the possibility of a catastrophic harm ‘quasi-religious’ without argument, and uses that to dismiss the bill, instead calling for regulations that address mundane harms. That’s actually it. OPENAI OPPOSES SB 1047 The first half of the story is that OpenAI came out publicly against SB 1047. They took four pages to state its only criticism in what could have and should have been a Tweet: That it is a state bill and they would prefer this be handled at the Federal level. To which, I say, okay, I agree that would have been first best and that is one of the bes
4dd3e8e5-850f-4eba-a9c4-9d81b209f7e8
trentmkelly/LessWrong-43k
LessWrong
The Story CFAR In addition to to my donation to MIRI, I am giving $4000 to CFAR, the Center for Applied Rationality, as part of their annual fundraiser. I believe that CFAR does excellent and important work, and that this fundraiser comes at a key point where an investment now can pay large returns in increased capacity. I am splitting my donation and giving to both organizations for three reasons. I want to meaningfully share my private information and endorse both causes. I want to highlight this time as especially high leverage due to the opportunity to purchase a permanent home. And importantly, CFAR and its principles have provided and in the future will provide direct personal benefits, so it’s good and right to give my share of support to the enterprise. As with MIRI, you should do your own work and make your own decision on whether a donation is a good idea. You need to decide if the cause of teaching rationality is worthy, either in the name of AI safety or for its own sake, and whether CFAR is an effective way to advance that goal. I will share my private information and experiences, to better aid others in deciding whether to donate and whether to consider attending a workshop, which I also encourage. Here are links to CFAR’s 2017 retrospective,  impact estimate, and plans for 2018. I My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed. I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountere
f03408a1-05d9-4d14-9a58-4ecbae72b26a
trentmkelly/LessWrong-43k
LessWrong
My Criticism of Singular Learning Theory In this post, I will briefly give my criticism of Singular Learning Theory (SLT), and explain why I am skeptical of its significance. I will especially focus on the question of generalisation --- I do not believe that SLT offers any explanation of generalisation in neural networks. I will also briefly mention some of my other criticisms of SLT, describe some alternative solutions to the problems that SLT aims to tackle, and describe some related research problems which I would be more excited about. (I have been meaning to write this for almost 6 months now, since I attended the SLT workshop last June, but things have kept coming in the way.) For an overview of SLT, see this sequence. This post will also refer to the results described in this post, and will also occasionally touch on VC theory. However, I have tried to make it mostly self-contained. The Mystery of Generalisation First of all, what is the mystery of generalisation? The issue is this; neural networks are highly expressive, and typically overparameterised. In particular, when a real-world neural network is trained on a real-world dataset, it is typically the case that this network is able to express many functions which would fit the training data well, but which would generalise poorly. Moreover, among all functions which do fit the training data, there are more functions (by number) that generalise poorly, than functions that generalise well. And yet neural networks will typically find functions that generalise well. Why is this? To make this point more intuitive, suppose we have a 500,000-degree polynomial, and that we fit this to 50,000 data points. In this case, we have 450,000 degrees of freedom, and we should by default expect to end up with a function which generalises very poorly. But when we train a neural network with 500,000 parameters on 50,000 MNIST images, we end up with a neural network that generalises well. Moreover, adding more parameters to the neural network will typically ma
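A scaled-down version of the polynomial analogy above, as a quick experiment. The degrees and sample sizes are illustrative, far smaller than the 500,000/50,000 in the text, and the target function is an arbitrary choice:

```python
# Fit a polynomial with as many coefficients as data points and compare
# train vs. test error; by default we expect interpolation plus poor generalisation.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 20)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=20)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)

poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=19)  # 20 coefficients, 20 points
train_mse = np.mean((poly(x_train) - y_train) ** 2)
test_mse = np.mean((poly(x_test) - y_test) ** 2)
print(f"train MSE ~ {train_mse:.2e}, test MSE ~ {test_mse:.2e}")
# Train error is near zero while test error is typically large; the post's point
# is that overparameterised neural networks usually do not behave this way.
```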
33759f8a-2645-4eba-afc4-5d6e4bf1b15d
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Metalearning & Induction Heads [rough early thoughts] hey this is chris and today i want to talk about how induction heads seem to cause metal learning this is preliminary work but i think it's one of the coolest things that we we seem to be discovering about transformer circuits and so i'm really excited to chat about it if you haven't watched my colleague catherine's uh previous video on how induction heads seem to cause a bump in the loss curve um you should really check that out before watching this video because this video isn't going to make a ton of sense without it now meta learning is probably at least to me it's it's the most impressive thing about large language models it's really um the thing that's most surprised me about them and about i think probably it's in some ways maybe the most surprising thing about deep learning um in the last number of years um and in fact uh it's i think so striking that um the gpd three paper decided to go and make it really the title of the paper is that language models are few shot learners so the idea here is we you know when we when we train models normally we expect them to learn over training but with these large language models we're starting to see that you can go and put examples in the prompt or in the context and they'll learn as they read so they can go and do later things in the context that they they couldn't do earlier in the context and this has led to all sorts of interesting things like you can go and do so-called prompt design where you go and you train models um or you design these these little prompts that instead of training the model you can you can sort of get it to do what you want by just designing the prompt and in fact um there was this little meme with uh andre uh carpathi um that you know maybe maybe the future of computing is that we'll just design the prompt um i think that's that's somewhat joking but and you know i think the fact that we're sort of joking about things like that sort of um i think is it sort of striking about how interesting a property um metal learning is um and it's deeply mysterious like we we design neural networks so that they'll be able to go and learn but somehow when you train large enough language models you seem to get meta learning for free um and so that's that's really interesting and you know i really want to know what's going on um so there's a previous paper um this is um the the kaplan at all skilling and skilling laws paper um and i think it has a really clever idea for how to think about uh meta learning so normally when we think about models learning we can we can look at a loss curve um and it you could think of this um you know the curve describing how the model learns over time so you know we have our training step um and as we train the model our loss goes down and actually and in this car lost curve it's for a two layer uh transformer we can see the induction bump from the previous video so the idea that kaplan all have is that you can also go and look at this not as a function of training step but as a function of token index you can take the fully trained model right at the end here and you can ask um you know okay how good is it at predicting the first token well you know it's not very good at predicting the first token how good is it at predicting the second token well it's also not very good at predicting the second token but um you know so these are you know the unigram statistics and then the bigram statistics and and the entropies of those but then you know as we go 
on um we start uh to get much much later in the in the prompt and and you've already sort of squeezed out all of the the short term distances but the the loss continues to go down and and so if the loss is continuing to go down somehow that means we're getting information from earlier tokens and using it to go and predict later tokens and so this curve in some ways it's a little bit like a learning curve but it's us learning over the context it's learning from earlier tokens in the context how to go and predict later tokens in the context now you have to average over lots of examples to see this but um if you average over over lots of examples in your training set and then just look at token index you start to see this curve um and i think this is a little bit of a grandeus way to describe it but i'm not the one who came up with the idea so i feel free to describe it that way um you could sort of think of this as a meta learning curve so you could think of this as being your learning curve and you could think of this as being your meta learning curve it's describing how you're you're going and uh how your metal learning is progressing um but you could think of both of those as really being slices or projections of a more general 2d picture and so here we have our model over training so here's the snapshots that we saved as the model was training and we can also look at the token index on this axis so um if we go and we average over this axis and project it all down we'd get the loss curve and if we take the final slice here um we get the the the the curve describing how good we are predicting different tokens um at the end of training so this is this is sort of a generalization of the the diagrams we saw before and it allows us to get this this nice 2d picture and see get an overview of what's going on and and yeah so i think that the most obvious and striking thing when you look at this diagram or at least to me the most striking thing is that there seems to be kind of a discontinuity um and we can zoom in on it and that corresponds to the bump that we saw earlier and that we discussed in the previous video where induction heads form so we can see induction heads forming in this in this plot so something that we can then do is we can say okay uh you know we'd like to somehow summarize how good the model is at meta learning into a single number that would be a really useful thing if we could do it um and one way we could try and do it is we could go and take you know how good we are of predicting say the 50th token um and we can look at that for different points in training so we can write at the beginning of training and then after a little bit and so on and uh when we learn about induction heads we see a little drop but not a very big one and then we can go and take um how good we are at predicting a token much later in training or much later in the context and we can see um we're better at predicting it and we have a much bigger drop and if we subtract those two numbers you could think of that as kind of being the amount of metal learning we're doing between token 50 and token 500. 
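(A minimal sketch of the measurement being described: average loss as a function of token index, and the difference between a late-token and an early-token loss as a crude "meta-learning score". The model and data batches here are hypothetical stand-ins for any autoregressive language model that returns per-position logits.)

```python
# Sketch: per-token-index loss curve and a token-50 vs token-500 score.
import torch
import torch.nn.functional as F

def per_token_loss(model, batches, seq_len=512):
    """Average cross-entropy at each token position, over many sequences."""
    totals = torch.zeros(seq_len - 1)
    count = 0
    with torch.no_grad():
        for tokens in batches:                      # tokens: [batch, seq_len]
            logits = model(tokens[:, :-1])          # predict token t+1 from the prefix
            loss = F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                tokens[:, 1:].reshape(-1),
                reduction="none",
            ).view(tokens.size(0), -1)
            totals += loss.mean(dim=0).cpu()
            count += 1
    return totals / count                           # entry i: loss at token index i+1

def meta_learning_score(loss_per_token, early=50, late=500):
    # More negative means the model predicts late tokens much better than early ones,
    # i.e. it is extracting more information from the preceding context.
    return (loss_per_token[late] - loss_per_token[early]).item()
```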
um so this is in some sense a measure of metal learning and the the lower it is um the better we are at meta learning and we see that there's this really abrupt drop um which happens right at the bump um right at the point where we think that in induction heads are forming um so that's really interesting and suspicious um and i don't i don't know about you but this is this is like you know this is a a really sharp curve like it's almost it's it's you know it's not exactly like this there's a little bit of maybe bending here but it's it's almost just an abrupt um uh discontinuity uh uh i mean it's pretty discontinuous so that's that's kind of an unusual thing um and uh yes seeing things like that uh it's very striking to me okay so um one question you might ask is you know how significant is this right like how how big a deal is it that we're getting um you know this negative 0.4 uh gnats of metal learning um our losses is measured in gnats from from information theory and what does that mean and i think there's a few ways you could interpret it so in the case of this model it's an 11th of the loss and so one way you could think about it is that means that when we predict token 500 um it's almost as though we get to go and predict like maybe every 11th token and we're able to go we get sort of get to for free just magically know the right answer and say it with 100 probability that would cause the same uh you know losing reduction of an 11th of our loss um another interpretation of it is uh it's 0.4 knots per token um but gnats are kind of a tricky way to think about things um i prefer to think about things in information theory in terms of bits so let's convert to bits that's about 0.5 bits a token which is about one bit every two tokens and remember that a bit um is enough information to distinguish between two things so it ends up being meaning something like you could sample the model twice and take the better token or take the better sample every two tokens that would that would be another thing that it would mean to go and drop by 0.4 not so i think it really is a very meaningful very meaningful drop and and uh you know we've we're already pretty far out in the context by token 50 and we're still getting that much more so um that's pretty impressive okay so another way we could picture this is uh in some ways the important thing is you know how much more information are we as we move to later tokens how much more information are we squeezing out and reducing our loss by and before we discover uh induction heads we see that you know we pretty early on um you know it basically becomes zero and we're not getting very much more and then all of a sudden so this is sorry this is this is the derivative um with respect to log token log because it seems like um they're sort of uh yeah it seems like probably the the order of magnitude of tokens that you have before you seems to be the the more important thing and if you look at these curves they sort of seem very linear on a log axis but not on a linear axis so we'll take that with respect to the log token and and yeah and it seems like before we discover induction heads you know maybe the first 20 or 40 tokens 50 tokens um you're still getting pretty significant learning or pretty pretty significant reductions in your loss but it pretty quickly levels out after after token 50 and as you go further down you're you're not learning much um whereas uh after induction heads form we we continue to see these really significant um reductions in loss 
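(The unit conversion above as a quick calculation. The 0.4-nat drop and the one-eleventh ratio are the figures quoted in the talk; the implied total loss is inferred from that ratio rather than measured.)

```python
import math

delta_nats = 0.4                       # drop in loss between token 50 and token 500
delta_bits = delta_nats / math.log(2)  # ~0.58 bits per token (the talk rounds to ~0.5),
                                       # i.e. roughly one extra bit every two tokens
total_loss_nats = 4.4                  # assumed from the stated "an eleventh of the loss"
fraction = delta_nats / total_loss_nats  # ~0.09, about 1/11
print(delta_bits, fraction)
```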
um so that's really interesting and then if we take a further derivative and we ask okay well you know how is our ability to go and increase so we have our first derivative which tells us as we go this direction how is our ability to put tokens um going in and increasing and if we take a derivative this way we can ask uh how is our ability to go and predict tokens um improving our ability to get better at predicting tokens how is that improving as we go and we move this way and we can see that there's you know i mean you can just visually see it here but it's nice to confirm there's this extremely extremely um uh you know sharp discontinuity that really really dominates the story so you know to first order the story of metal learning is that in the span of a few snapshots um which is really just a few hundred steps of training and we go from not being very good at it to being quite good at it and that's that's really the story of metal learning this is for a small model and this is only for a two layer attention only transformer where it's really easy to study things um but we can also look at large models and large models also seem to have a bump um it's right here um and that's that about and now we're measuring in tokens and so that's at about 10 to the nine tokens and it turns out that if you go and measure the same uh notion of of doing a difference between how good you are predicting two tokens to go and look at get sort of a meta learning score and right at ten to the nine tokens and we see an abrupt drop an abrupt discontinuous drop and it seems to not matter that much how big your model is um across uh you know several orders of magnitude and they all drop in a pretty similar way now before right at the beginning the large models are better at predicting than these small models are but after the drop um the metal learning scores really dominated just by which side of the drop you're on which side is this bump um you're on and not matter that much how large your model is that's kind of the opposite of what i usually think about model size usually i expect model size to just make everything better but here we have something that to a pretty significant extent is it's just the same amount regardless of model size so that's interesting um in any case that's kind of uh rambling a little bit but i think that the striking thing is that this is still happening in large models and so the the thing that we we you know to me that's crying out when i see this pattern is what the hell was happening when we see this drop now of course i've sort of already given away the theory we have which is that it's induction heads but um you know we'll we'll we'll talk about that more in a sec um uh but you know i i i think that it's it's really you know we and maybe maybe one reason why i'm so struck by this is i don't know about you but i'm used to seeing weird things in law scares in fact this is like a pretty smooth loss curve all things considered and you know it's not that unusual to have random bumps and quirks in your lost curve um but here this this thing that seems at first glance like a relatively um you know not that big a deal thing and and and you know everyday phenomena turns out to be doing something like really the the most interesting property of models to me um is is like undergoing this like phase change at that point it's it's dramatic um and so that's really interesting um okay so the theory that we're gonna have is that this is driven by induction heads now remember that an induction 
head uh it's kind of doing a nearest neighbor's lookup so it says okay this is my present token and well actually maybe i'll i might even include some previous tokens um but let's just focus on the on the present token for a second um so here with the first paragraph of harry potter because we i find it fun to use the first paragraph harry potter and and we have the token d okay so now we're gonna go and look for the token d but shifted one forward so we're gonna look at what happens next um and then we're gonna look at what happened next and increase the probability that that is the next token so we did this kind of nearest neighbors look up over um our present token and surrounding context and then looked for something similar shifted one forward and used that to go and predict the next token um and so this is this kind of nearest neighbor um search over our context is is really the thing that induction heads add and and you have to remember that in order to do this you have to have a previous attention head that can go and do this shifting so you need to have two attention heads in your start cut to go and implement this first you have to have one attention that can shift information forward and then you have the attention head that goes and looks uses that shifted information to go and match something to the shifted version and then go in and increase the logic so that's what an induction head has is and it's very natural that something like this could be used for meta learning because it's allowing you to search and find similar situations and then do what happened last time when you had a similar situation um okay so the hypothesis is that induction heads drive metal learning and um we're gonna have a few pieces of evidence i think the big ones are for small models well small attention only models we can really show that uh induction heads are causing the bump and they're causing metal learning um and then for larger models the story's gonna be a little bit harder we'll still be able to say that metal learning forms of the bump um and we'll still be able to go and say that high induction heads form at the same time as the metal learning forms but it'll have to it'll only be a correlational argument so um this does leave open some possibilities that um maybe maybe there's additional things that are happening at that bump but that weird transition um and it could be the case that large models are different so we can we can sort of rigorously understand this in small models in large models we just have correlational evidence and it could be that there's other things we'll talk about that more in a minute okay so um for for small models for small attention only models and one thing that's nice is you can just sort of mathematically define induction heads because remember that they have um if you if you've watched our early videos where we have this really nice uh analytical framework for understanding these small models uh an induction head is one that has positive ov um output value uh eigenvalues and also has positive qk eigenvalues attending to the previous token so um that's kind of a yeah that gives us a nice mathematical definition and we can turn that into a score we we talked about this more in the previous video with catherine but you can turn this into a measure of how much something is an induction head by looking at the fraction of the eigenvalues that are positive and then going and taking a minimum from ov and qk and and so here what we have is we've colored each one of 
these lines is a induction head and we're seeing if you knock it out how does the metal learning score change so um a positive value means that the meta learning score increases but remember the negative value um it's a delta and loss a lower value um was what it meant that that we were better at at metal learning so um having a higher value here means that the induction head is improving our ability to do metal learning having a lower value means that knocking it out actually seems to um uh seems to well knocking it improves it so it's sort of doing the opposite and we'll talk about that in a little bit as well um and then they're colored by this this score that tells us how much they are induction at so the obvious story here is well we have this one as one head that suddenly starts to become an induction head and becomes right at the bump starts to go and cause uh yeah really really drive metal learning um i should say this this line here is the start of the bump um and this line here is the end of the bump um and there's there's some other heads that uh also uh become induction heads and uh contribute to metal learning now there's this weird thing where they they all sort of um basically all of the heads seem to sort of rush to contribute to metal learning and then stop and then many work some recover the induction heads recover um we don't know exactly what's going on there but the theory is probably that they you know once they figure out they can do induction heads a lot of attention heads try to do it because it'd be very useful and then there's one one winner or a few winners and the others go and do something else um now there's one outlier here which is sort of an induction head it's a little bit borderline um uh maybe maybe sort of weekly scores on this induction head score um so it has some negative eigenvalues some positive ones and it's really not doing metal earnings we don't know what's up with that um another thing you might ask is you know how can there be heads that are that are sort of doing the opposite of metal learning well my guess is that these are heads um that when you knock them out damage the model um more generally uh and um damaging the model more generally might just mean that you're you're predicting tokens more more uniformly um uh so that that would be uh uh uh sorry um knocking out more might mean that your uh might i mean i guess it's sort of clear that this theory is very special but it's not something i thought through very carefully but it it could be that um this this destroys your ability to engram statistics or something like this um and so the only thing that remains maybe is your is your metal earning and and and it's able to go and contribute more to the loss so that might be one theory like if you're if you're bad at predicting early tokens then the meta learning score would would increase um but yeah i think the significant story here to me is that very clearly the induction heads are the ones that are driving better learning um and because this is an attention only model there isn't anything else that could be driving metal learning so we we know that in this case the model is really um yeah the metal learning is really coming from the induction heads um yeah so that so yeah okay so there's a few other arguments for why um you might think that uh in the case of small attention only models um the the induction heads are really driving this metal learning change um another one is just that we really understand these models and there isn't 
really anything else that seems like it could be driving metal learning um and also uh we we really carefully in the previous video were able to analyze the induction head bump um which is seems to correspond to this this meta-learning phase change um and in that case we were able to do things like look at individual token losses and study how how those tokens were um yeah were changing and those were were both cases that sort of seem like they could you could interpret them as kinds of metal learning um and they they all very naturally flowed from uh having induction heads so uh if you haven't watched the previous video again i'll encourage you to go and go and watch it um now for large models i think this is a trickier argument um when you have thousands of attention heads um and a large model that's more expensive to run it starts to become annoying to go um and do ablations of every single head and so we're left instead uh with uh a more correlational argument and we can see that uh metal learning develops uh right at token 10 to the nine or right around token 10 to the nine and we can see that that's when induction heads form um and this is across a wide range of model sizes um the score is the score of the attention head that most matches um an empirical estimate so we can't use the remember that previously in the small models we could um i mentioned we could we could go and detect uh induction heads based on their eigenvalues that doesn't work in large models because of mlps and so instead uh we have to go uh and and come up with some empirical way to detect induction heads and what we do is we just create a string of some random tokens um and uh we see if they if it's able to go and look at the look at the next token and because it's a sequence of random tokens and the only way that it could sort of go and look at the token that previously followed the token the present token is if it's actually doing induction it couldn't be relying on other statistics because it's it's a it's a random sequence so it really has to be doing this kind of search through the context and then look what happens next that's that's the only way um and these attentions are really accurate at it right like they're some of these are getting up to like you know certainly in the 90 90 percentiles um and then they start to fall i think that's because there's multiple inductions that are taking on uh different roles and so you know maybe one one matches when there's like multiple tokens within a word and another one matches in other cases and so on so then they they start to become you know only only look at a smaller fraction of those but um but yeah there's so at ten to the nine they form um and that's when the metal learning forms as well so that's that's kind of an actual argument right that sort of seems to me um you know it would be a very shocking correlation um uh to just happen by by happenstance um now it could be um that there's additional things happening there um uh but we we know in the small model case that this is really the thing that's driving it and it certainly looks very similar here we know that the same thing's happening and the same results happening and so it's natural to draw that inference now could it be the case um that there are additional things happening here um yes it could be and we'll talk about that more later but it could be that there are additional things driving uh meta learning and large models that happen form at the same time as induction heads um and that we haven't 
yet noticed and we can we'll talk about how plausible that is later but um yet first order i think uh it looks like the uh uh yeah it looks like induction heads are probably a big part of the of what's going on um one other thing that's maybe worth mentioning this is really an aside it is kind of cute um this kind of there's sort of this oscillating feeling um around the induction head bump and it actually seems like that might be real it's not something that we studied carefully um but it does seem uh it's the same thing that we were seeing earlier with um uh with having some attention heads sort of over over correct or something um and have to bounce around a little bit um that may or may not be yeah it's a theory okay so um okay so our hypothesis is that induction heads uh are implementing uh meta learning and transformers and that's that's kind of a wild claim to me um so it's it i think it sort of is clear or sort of makes sense that induction heads could go and uh predict the later tokens so um or predict copy token so like here we have the first paragraph of harry potter um you know we we're at uh we're at this token d we look back we see that there's an errors we're like ah it's going to be errs again and we increase the probability of that token then we're on the token errors and we look back at errors and we see ah the next token was li and we can we predict li and we we get a better loss than that so and it makes sense that induction heads uh can can copy repeated chunks of text um both you know very short terms of phrases um or names or things like that and also also longer things and uh and i think it also you know it also makes sense that they can like we saw in in one of the previous videos that there was um a cross language induction head that looked for trend you know words that are in different languages and then looks one forward and allows you to go and and generate translated versions of text so and you know there's other things like this but and and you know it makes sense that those would would cause your loss to go down um later later in your context but you know is it really plausible that those could be explaining metal learning um like you know these large language models they can do all these things that uh are are different from uh you know they they aren't just going and root copying things they they learn functions and they can go and and do all sorts of um you know interesting um you know generalizations um on top of uh the the thing that they learned so uh yeah so could it be that that these induction heads are doing the same thing um well okay so one one thing you could think of is that maybe the more general thing that induction heads are doing is there are kind of metal learning nearest neighbors they're doing a nearest neighbor search over the context and that could be you know going and matching literal previous token and literal next token um but they can also be softer and they can also copy more flexible things we we saw in a previous video not an induction head but a copying head that was was copying um things like gender intense and plurality um to go and and keep sort of manage agreement between adjacent sentences and stuff like this well um you can you could have induction heads be be soft as well the translation head was one of those and um it turns out these soft induction heads can also help you learn functions um and that's the thing that really made me start to um maybe at least it was it was the thing that knocked down the last barrier to 
me of thinking that induction heads might actually be the thing that's driving metal learning um sorry so um what we have here uh is uh the uh uh yeah a little toy problem that i set up so um we have uh a we're trying to simulate sort of a pattern matching problem that we know for sure the model um has never seen before because i i made it up and it's very silly um and will allow us to go and explore the role of some softer induction heads in going and learning functions okay so the the puzzle here um or the the the problem here is um sometimes the first word is a color and sometimes it's a month so sometimes it's a color sometimes it's a month and sometimes the second word is an animal and sometimes it's a fruit and then we interpret each one of those as a binary variable and we do xor so green lizard is false um because it's a it's a color and an animal and similarly a month and a fruit is false but the other combinations um so a color and a fruit will be true and a month and an animal will be true okay so this is a pretty arbitrary function and the thing that we'd like to see is when we are on a colon the model should look for a previous instance that is the same um conceptually the same as our situation and then go and look at what the answer is so here we have a month animal and it attends back to a previous month animal and says ah it was true in fact it's looking for one where the month is the same so it's april and and april and i guess dogs and cats are kind of similar so maybe that's why it likes it it's it's also um looking at marched lizard a little bit which is also a month animal um but it's putting less weight there um because i guess it's uh less it's less of a close match from its perspective it's working on on high level linguistic features that make sense um okay if we go to january wolf um this is also a month animal and it wants to get here's one that's january as well so it goes and it looks it true and here's another one looks it true oh over here we have march lion um i guess wolf and lion are kind of similar somebody likes that uh and here we have a wolf he said it's true um but it's successfully going and looking these cases okay let's look at one that isn't um uh so here we have a color and a fruit so purple lemon well okay we attend to red cherry that's true um we go to purple apple that's true we go to gray lemon that's true so we're looking at these previous cases of this of this of the function um okay here we have another oh no here we have a a month fruit um and we when we look here we have um october cherry um and it's false um now this one uh is actually an error it's um it's looking at the wrong thing um it's probably because we have september here and we have september here and it likes that and then we also have cherry on the previous line so it's attending to the wrong place but it's noticing um that there's a cherry oh i guess in this case there was also a cherry on the previous line so maybe that's why i got confused but it's giving most of its probability to a correct case and it's just putting a little bit of probability on this this incorrect one so we're still going to get the right answer um oh i guess we're also putting some some probability here so uh yeah but most the one that's getting the highest probability is still correct uh let's keep going um what about yeah we could we could look at this one now um okay we have red cherry true gray cherry true gray strawberry true red apple true so all those are correct cases to attend to and use um 
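(The toy task just described, written out as a tiny data generator. The word lists are stand-ins; only the structure matters: two binary category variables combined with XOR.)

```python
import random

COLORS  = ["red", "green", "blue", "purple", "gray"]
MONTHS  = ["january", "march", "april", "july", "september", "october"]
ANIMALS = ["lizard", "wolf", "cat", "dog", "lion", "fish"]
FRUITS  = ["cherry", "lemon", "apple", "strawberry"]

def make_example(rng=random):
    first_is_month  = rng.random() < 0.5       # color -> False, month -> True
    second_is_fruit = rng.random() < 0.5       # animal -> False, fruit -> True
    first  = rng.choice(MONTHS if first_is_month else COLORS)
    second = rng.choice(FRUITS if second_is_fruit else ANIMALS)
    label  = first_is_month ^ second_is_fruit  # XOR of the two binary variables
    return f"{first} {second}", label

# e.g. ("green lizard", False), ("april cat", True), ("purple lemon", True)
```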
and now we're back at a month animal and yeah we're going and seeing july fish is the one that's getting the most april cat that's another valid one we're giving a little bit of probability actually quite a bit of probability to this one which is not a correct case to attend to but mostly we're attending to the correct cases so we're still going to go and get the answer right um so the thing that i think is so cool about this is this is a completely made-up nonsense function and the model has learned it um by going and using previous examples um uh and it's you know it's not just uh uh uh well okay i guess i'm i'm sort of making a fool of myself a little bit but um here i it's not just you know i'm showing you ones later on but but it's also early right so here we have red frog and then we're looking to green frog and blue lizard and those are valid cases we're putting a little bit of probability on one of the wrong ones um and this one we put a little bit of we put quite a bit of probability on the wrong one but we're still putting probability on two correct ones um so that might still win out and so this also works early in the context and works better later in the context but it also works early in the context um and we can look at the effect on the logits and see you know when when are they made more probable and we can see that you know this one is made more probable by a correct match and um by correct matches and correct matches and there's all of these cases and it's it's it's working um so it really does seem like these these soft deduction heads are able to go and allow for the model to learn learn functions like this which is to me really cool um because the ability to to learn functions and pattern match like this um i think was was one of the things about large language models to me that was was sort of most um most demonstrated that they were really doing in context meta learning um and so now we have sort of a theory of of why that's the case okay um so that's that hopefully that makes it at least seem plausible that uh induction heads could be doing at least a lot of metal learning um could other things be contributing yes so it's certainly possible that in large models other things um could be contributing now uh i think there's two kinds of barriers to that as a theory one is it really seems like they would have to be forming more or less simultaneously with induction hats um so we know that uh that when induction hence form um that is when uh the meta-learning forms as well and so if there's something else um it seems like they have to form there as well um it also seems like we need to explain uh why so we know that that in in the small intentionally models metal learning is really driven by induction heads um we can sort of just look at everything and go and show that it's the induction heads that are driving things and so i think you have to explain then why is it um that first of all whatever you you're hypothesizing has to not form in these small models that seems reasonable could be that there's things that just don't form in small models and do form and large models but then why is it that the same we get the same this constant amount of metal learning regardless of model size um why is it that it's not that the large models aren't doing better at metal learning um now in some sense uh there's there's a way in which this is a little bit deceiving um which is they are they're they're getting the same number of bits from metal learning but their loss is lower and so 
those are sort of maybe a harder bits of information and we know that large models have more induction heads and more soft induction heads um like the translation head or the the pattern matching head those don't exist in really tiny models um so so i think that's a little bit of a strange argument that i'm making here but um it is very suspicious to me the same amount of metal learning gets learned in all models um and that does seem like an argument for i have no idea why that should be the case but it to the extent that it's true it does point in the direction that somehow whatever is going on in all these models is the same um but who knows i think that is very much an open question okay now another question that i've been wondering about is is it possible that action heads are the reason transformers do better than lstms okay so um let me unpack that a little bit because that's that's maybe sort of an out of the out of the blue question um well kaplan at all uh notice that um this is the skilling laws paper um from 2020 and they have a plot that yeah it's not very emphasized but they're the ones we got this whole idea of going and looking at um loss as a function of token index from um well they make a really interesting observation which is if you look at transformers the loss continues to go down but if you look at lstms of the same size for a while they do kind of similarly but then they flatten out um and they tend to flatten out somewhere before token 100 so we've taken 100 here and somewhere before then or around then i think at least a little bit before then um they flatten out okay well that's kind of interesting because we saw something flattened out earlier we saw um that before induction heads form we flatten out you know we start to flatten it probably around token 40 or 50 and you know by token 100 we're really starting to flatten out we've a little bit get a little bit more here um but we're we're really significantly flattening out and are basically flat we're significantly flat i think from you know somewhere between token 1500 to onwards so you know that kind of rhymes with this result it's not um certainly certainly not uh errol proof um but it is it is really suspicious um to me that uh you know these um the lstms are happening flooded at the same you know roughly the same time that uh transformers before they develop induction heads do and there's another reason why i think it's it's a little bit suspicious which is uh induction heads are fundamentally a mechanism that it seems like transformers couldn't implement because they're searching over the previous context um and so you can't do that with a constant amount of time um per step because you um you yeah you're it it's you know probably probably it's before you have to do some compute for every every previous step or another way i think about this is that going and doing um doing search um for for every token over all previous tokens um natalie that's going to be the on squared but uh transformers only have on compute and so um and and if you were really clever maybe you could make it o n log n but i i think probably uh you know neural nets aren't discovering those algorithms and even if they were still still larger than o n so it really seems like that's something that it's it seems like lstms couldn't be doing um and so we we know that before uh induction heads form transformers are relying on these simple copying heads um and those do seem like something that uh lstms uh could be could be doing an alligator 
something analogous to you know just being like oh you know i saw this token previously probably it's gonna happen again um and so it it really yeah i don't know again this is this is speculation i don't you know i don't think we we rigorously know this is the case but it is kind of suspicious and you know i do wonder um so that was a fun little fun little thing um one other thing that i think is worth talking about um is well okay so at least now okay i think this is all a little um uh a little unclear but um if you put if you put your con your context on a log access these these curves these metal learning curves they do look pretty um they do look pretty linear and if you especially people you're lost on a log axis as well so it does seem like metal learning within a context is probably modeled pretty well um as a scaling law and there's a paper by by marcus hutter from deepmind um and uh he gives uh sort of a an argument that we should really expect models to follow scaling laws um at least with respect to data that as you go and give them more data um you should expect them to follow a scaling law um uh and something i found interesting as i was thinking about this is you know these models uh they follow okay so we seem to be observing scaling well you know the actually i think the case of of meta-learning with induction heads sort of matches the theoretical argument that um that marcus is giving better probably than the standard case of going and training models um so um the the theoretical stop and the and the other papers um that you you're observing a bunch of data points and if you've seen um if you've seen a particular problem before then in the future you get zero loss on it so um you're observing the sequence of of training examples and then um if you see the same thing again uh you go and you you you get it perfectly and otherwise you you get a bad loss and you have the same bad loss so um that seems very similar to having a context and seeing a bunch of tokens and if you see a token that matches something earlier in your context you can look at what happens next and get an a very very small loss um that seems very similar and so you can you can really take just all the arguments um that are in this paper and they run through for the case of metal learning um with with induction heads and so we should really really expect scaling laws so i thought that was kind of fun okay so in summary um meta learning seems to form really abruptly in the span of a few hundred steps um and induction heads form at the same time um so that's that's interesting um induction heads can be seen as a kind of nearest neighbor metal learning algorithm um and we saw that they could even even learn functions um and in addition to copying text um and induction had seemed to substantially cause meta-learning um certainly they form at the same time as metal learning and in small models they can really be shown to be causing it uh so uh yeah i don't know this is i guess i'm just excited about all this because it seems like it's getting really um you know potentially some deep insight or or or getting at some some of you know what seems to me one of the the biggest mysteries of of these large models um and so that was was really exciting to me uh anyways thanks for listening
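One way to make the empirical induction-head test described in the video concrete: feed the model a repeated sequence of random tokens and measure, for a given attention head, how much attention each position in the second copy pays to the token right after that token's first occurrence. The `get_attention_pattern` hook below is a hypothetical stand-in; how you actually pull out attention patterns depends on your model and library.

```python
# Sketch of an empirical "induction score" on a repeated random-token sequence.
import torch

def induction_score(model, get_attention_pattern, layer, head,
                    vocab_size=50000, rep_len=50, seed=0):
    g = torch.Generator().manual_seed(seed)
    first = torch.randint(0, vocab_size, (rep_len,), generator=g)
    tokens = torch.cat([first, first]).unsqueeze(0)          # [1, 2*rep_len]

    # Hypothetical hook: returns this head's attention matrix of shape [seq, seq].
    attn = get_attention_pattern(model, tokens, layer, head)

    # For a query at position q in the second copy, the induction target is the
    # position just after the previous occurrence of the same token: q - rep_len + 1.
    scores = []
    for q in range(rep_len, 2 * rep_len - 1):
        scores.append(attn[q, q - rep_len + 1].item())
    return sum(scores) / len(scores)   # close to 1.0 for a strong induction head
```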
e33de5f7-4c45-4f5e-9b5b-fe48e59adbc9
trentmkelly/LessWrong-43k
LessWrong
Limits of Bodily Autonomy Bodily autonomy is a strong consideration in favor of both abortion and optional vaccination. Because it is not the only consideration, however, many of us are in favor of abortion for anyone who wants one while also seeing some cases in which we favor mandatory vaccination. I've recently seen posts from friends supporting the right to abortion, such as shares of screenshots of this tweet: > If you don't accept that bodily autonomy is an essential unconditional liberty, it's a waste of time talking to you at all. No other liberties survive without that one, more fundamental than property rights: if you don't own yourself absolutely, you own nothing. > —@ScavengerEthic In some ways this is similar to the use of the slogan 'my body, my choice', but I don't think people generally interpret short phrases as a complete argument; a slogan often draws attention to a major consideration without claiming it's decisive. In this case, however, people are sharing statements that do claim to be a full argument, and make the case for abortion in a way that also makes the case against vaccination requirements. Q: A right to abortion isn't anything like a right to refuse vaccination! Their impact is very different, and there are lots of other considerations: vaccination prevents something contagious, pregnancy and childbirth can be deeply difficult, unpleasant, and dangerous, etc. A: I agree. Which is why we should make the case for these policies in a way that depends on those considerations, instead of resting the entire case on autonomy. Q: I do think bodily autonomy is an essential unconditional liberty, and I'm opposed to both abortion bans and vaccination requirements. A: That's a consistent position, but you're not my audience here. Q: A vaccine mandate doesn't mean you'll have one forced on you, it just means you can't go certain places. That's very different from threatening jail time for abortions. A: If the court had ruled that states could exclude people who h
ed4311f0-cff9-4bf0-b196-82b8529dcd18
StampyAI/alignment-research-dataset/arxiv
Arxiv
Emergent Road Rules In Multi-Agent Driving Environments 1 Introduction ---------------
Figure 1: Multi-agent Driving Environment. We train agents to travel from a→b as quickly as possible with limited perception while avoiding collisions and find that “road rules” such as lane following and traffic light usage emerge.
Public roads are significantly more safe and efficient when equipped with conventions restricting how one may use the roads. These conventions are, to some extent, arbitrary. For instance, a “drive on the left side of the road” convention is, practically speaking, no better or worse than a “drive on the right side of the road” convention. However, the decision to reserve some orientation as the canonical orientation for driving is far from arbitrary in that establishing such a convention improves both safety (doing so decreases the probability of head-on collisions) and efficiency (cars can drive faster without worrying about dodging oncoming traffic). In this paper, we investigate the extent to which these road rules – like the choice of a canonical heading orientation – can be learned in multi-agent driving environments in which agents are trained to drive to different destinations as quickly as possible without colliding with other agents. As visualized in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Emergent Road Rules In Multi-Agent Driving Environments"), our agents are initialized in random positions in different maps (either synthetically generated or scraped from real-world intersections from the nuScenes dataset (Caesar et al., [2019](#bib.bib2 "nuScenes: A multimodal dataset for autonomous driving"))) and tasked with reaching a randomly sampled feasible target destination. Intuitively, when agents have full access to the map and exact states of other agents, optimizing for traffic flow leads the agents to drive in qualitatively aggressive and un-humanlike ways. However, when perception is imperfect and noisy, we show in Section [5](#S5 "5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") that the agents begin to rely on constructs such as lanes, traffic lights, and safety distance to drive safely at high speeds. Notably, while prior work has primarily focused on building driving simulators with realistic sensors that mimic LiDARs and cameras (Dosovitskiy et al., [2017](#bib.bib39 "CARLA: an open urban driving simulator"); Manivasagam et al., [2020](#bib.bib31 "LiDARsim: realistic lidar simulation by leveraging the real world"); Yang et al., [2020](#bib.bib35 "SurfelGAN: synthesizing realistic sensor data for autonomous driving"); Bewley et al., [2018](#bib.bib34 "Learning to drive from simulation without real world labels")), we focus on the high-level design choices for the simulator – such as the definition of reward and perception noise – that determine if agents trained in the simulator exhibit realistic behaviors. We hope that the lessons in state space, action space, and reward design gleaned from this paper will transfer to simulators in which the prototypes for perception and interaction used in this paper are replaced with more sophisticated sensor simulation. Code and Documentation for all experiments presented in this paper can be found in our Project Page: <http://fidler-lab.github.io/social-driving>.
Our main contributions are as follows: * We define a multi-agent driving environment in which agents equipped with noisy LiDAR sensors are rewarded for reaching a given destination as quickly as possible without colliding with other agents and show that agents trained in this environment learn road rules that mimic road rules common in human driving systems. * We analyze what choices in the definition of the MDP lead to the emergence of these road rules and find that the most important factors are perception noise and the spatial density of agents in the driving environment. * We release a suite of 2D driving environments222<https://github.com/fidler-lab/social-driving> with the intention of stimulating interest within the MARL community to solve fundamental self-driving problems. 2 Related Works ---------------- Reinforcement Learning Deep Reinforcement Learning (DeepRL) has become an popular framework that has been successfully used to solve Atari (Mnih et al., [2013](#bib.bib12 "Playing atari with deep reinforcement learning")), Strategy Games (Peng et al., [2017](#bib.bib13 "Multiagent bidirectionally-coordinated nets for learning to play starcraft combat games"); OpenAI, [2018](#bib.bib16 "OpenAI five")), and Traffic Control (Wu et al., [2017](#bib.bib15 "Emergent behaviors in mixed-autonomy traffic"); Belletti et al., [2018](#bib.bib14 "Expert level control of ramp metering based on multi-task deep reinforcement learning")). Vanilla Policy Gradient (Sutton et al., [2000](#bib.bib4 "Policy gradient methods for reinforcement learning with function approximation")) is an algorithm that optimizes an agent’s policy by using monte-carlo estimates of the expected return. Proximal Policy Optimization (Schulman et al., [2017](#bib.bib3 "Proximal policy optimization algorithms")) – which we use in this work – is an on-policy policy gradient algorithm that alternately samples from the environment and optimizes the policy using stochastic gradient descent. PPO stabilizes the Actor’s training by limiting the step size of the policy update using a clipped surrogate objective function. Multi-Agent Reinforcement Learning The central difficulties of Multi-Agent RL (MARL) include environment non-stationarity (Hernandez-Leal et al., [2019](#bib.bib20 "A survey and critique of multiagent deep reinforcement learning"), [2017](#bib.bib23 "A survey of learning in multiagent environments: dealing with non-stationarity"); Busoniu et al., [2008](#bib.bib22 "A comprehensive survey of multiagent reinforcement learning"); Shoham et al., [2007](#bib.bib21 "If multi-agent learning is the answer, what is the question?")), credit assignment (Agogino and Tumer, [2004](#bib.bib24 "Unifying temporal and structural credit assignment problems"); Wolpert and Tumer, [2002](#bib.bib25 "Optimal payoff functions for members of collectives")), and the curse of dimensionality (Busoniu et al., [2008](#bib.bib22 "A comprehensive survey of multiagent reinforcement learning"); Shoham et al., [2007](#bib.bib21 "If multi-agent learning is the answer, what is the question?")). Recent works (Son et al., [2019](#bib.bib28 "Qtran: learning to factorize with transformation for cooperative multi-agent reinforcement learning"); Rashid et al., [2018](#bib.bib27 "QMIX: monotonic value function factorisation for deep multi-agent reinforcement learning")) have attempted to solve these issues in a centralized training decentralized execution framework by factorizing the joint action-value Q function into individual Q functions for each agent. 
Alternatively, MADDPG (Lowe et al., [2017](#bib.bib46 "Multi-agent actor-critic for mixed cooperative-competitive environments")) and PPO with Centralized Critic (Baker et al., [2019](#bib.bib30 "Emergent tool use from multi-agent autocurricula")) have also shown promising results in dealing with MARL Problems using policy gradients. Emergent Behavior Emergence of behavior that appears human-like in MARL (Leibo et al., [2019](#bib.bib33 "Autocurricula and the emergence of innovation from social interaction: A manifesto for multi-agent intelligence research")) has been studied extensively for problems like effective tool usage (Baker et al., [2019](#bib.bib30 "Emergent tool use from multi-agent autocurricula")), ball passing and interception in 3D soccer environments (Liu et al., [2019](#bib.bib47 "Emergent coordination through competition")), capture the flag (Jaderberg et al., [2019](#bib.bib48 "Human-level performance in 3d multiplayer games with population-based reinforcement learning")), hide and seek (Chen et al., [2019](#bib.bib38 "Visual hide and seek"); Baker et al., [2019](#bib.bib30 "Emergent tool use from multi-agent autocurricula")), communication (Foerster et al., [2016](#bib.bib52 "Learning to communicate with deep multi-agent reinforcement learning"); Sukhbaatar et al., [2016](#bib.bib51 "Learning multiagent communication with backpropagation")), and role assignment (Wang et al., [2020](#bib.bib36 "ROMA: multi-agent reinforcement learning with emergent roles")). For autonomous driving and traffic control (Sykora et al., [2020](#bib.bib56 "Multi-agent routing value iteration network")), emergent behavior has primarily been studied in the context of imitation learning (Bojarski et al., [2016](#bib.bib42 "End to end learning for self-driving cars"); Zeng et al., [2019](#bib.bib43 "End-to-end interpretable neural motion planner"); Bansal et al., [2018](#bib.bib40 "ChauffeurNet: learning to drive by imitating the best and synthesizing the worst"); Philion and Fidler, [2020](#bib.bib41 "Lift, splat, shoot: encoding images from arbitrary camera rigs by implicitly unprojecting to 3d"); Bhattacharyya et al., [2019](#bib.bib55 "Simulating emergent properties of human driving behavior using multi-agent reward augmented imitation learning")). In contrast to work that studies emergent behavior in mixed-traffic autonomy (Wu et al., [2017](#bib.bib45 "Emergent behaviors in mixed-autonomy traffic")) and traffic signal control (Stevens and Yeh, [2016](#bib.bib49 "Reinforcement learning for traffic optimization")), we consider a fully autonomous driving problem in a decentralized execution framework and show the emergence of standard traffic rules that are present in transportation infrastructure. 3 Problem Setting ------------------ We frame the task of driving as a discrete time Multi-Agent Dec-POMDP (Oliehoek et al., [2016](#bib.bib44 "A concise introduction to decentralized pomdps")). Formally, a Dec-POMDP is a tuple G=⟨S,A,P,r,ρ0,O,n,γ,T⟩. S denotes the state space of the environment, A denotes the joint action space of the n agents s.t. ⋃ni=1ai∈A, P is the state transition probability P:S×A×S↦R+, r is a bounded reward function r:S×a↦R, ρ0 is the initial state distribution, O is the joint observation space of the n agents s.t. ⋃ni=1oi∈O, γ∈(0,1] is the discount factor, and T is the time horizon. We parameterize the policy πθ:o×a↦R+ of the agents using a neural network with parameters θ. In all our experiments, the agents share a common policy network. 
Let the expected return for the ith agent be η_i(π_θ) = E_τ[ ∑_{t=0}^{T−1} γ^t r_{i,t}(s_t, a_{i,t}) ], where τ = (s_0, a_{i,0}, …, s_{T−1}, a_{i,T−1}) is the trajectory of the agent, s_0 ∼ ρ_0, a_{i,t} ∼ π_θ(a_{i,t} | o_{i,t}), and s_{t+1} ∼ P(s_{t+1} | s_t, ⋃_{i=1}^{n} a_{i,t}). Our objective is to find the optimal policy which maximizes the utilitarian objective ∑_i η_i. Reward We use high-level generic rewards and avoid any extensive reward engineering. The agents receive a reward of +1 if they successfully reach their given destination and -1 if they collide with any other agent or go off the road. In the event of a collision, the simulation for that agent halts. In an inter-agent collision, we penalize both agents equally without attempting to determine which agent was responsible for the collision. We additionally regularize the actions of the agents to encourage smooth actions and add a normalized penalty proportional to the longitudinal distance of the agent from the destination to encourage speed. We ensure that the undiscounted sum of each component of the reward for an agent over the entire trajectory is bounded: 0 ≤ r ≤ 1. We combine all the components to model the reward r_{i,t} received by agent i at timestep t as follows:

r_{i,t}(s_t, a_{i,t}) = I[Reached the destination at timestep t]
− I[Collided for the first time with an obstacle/agent at timestep t]
− (1/T) ⋅ ‖(a_{i,t} − a_{i,(t−1)}) / (a_i)_max‖² − (1/T) ⋅ (Distance to goal from current position / Distance to goal from initial position),

where (a_i)_max is the maximum magnitude of action that agent i can take. Map and Goal Representation We use multiple environments: four-way intersection, highway tracks, and real-world road patches from nuScenes (Caesar et al., [2019](#bib.bib2 "nuScenes: A multimodal dataset for autonomous driving")), to train the agents. The initial state distribution ρ_0 is defined by the drivable area in the base environment. The agents “sense” a traffic signal if they are near a traffic signal and facing the traffic signal. These signals are represented as discrete values – 0, 0.5, 1.0 for the 3 signals and 0.75 for no signal available – in the observation space. In all but our communication experiments, agents have the ability to communicate exclusively through the motion that other agents observe. In our communication experiments, we open a discrete communication channel designed to mimic turn signals and discuss the direct impact on agent behavior. Additionally, to mimic a satnav dashboard, the agents observe the distance from their goal, the angular deviation of the goal from the current heading, and the current speed. LiDAR observations We simulate a LiDAR sensor for each agent by calculating the distance to the closest object – dynamic or static – along each of n equi-angular rays. We restrict the range of the LiDAR to be 50m. Human eyes themselves are imperfect sensors and are easily thwarted by weather, glare, or visual distractions; in our experiments, we study the importance of this “visual" sensor by introducing noise in the sensor. We introduce x% LiDAR noise by uniformly at random dropping (assigning a value of 0) x% of the rays at every timestep (Manivasagam et al., [2020](#bib.bib31 "LiDARsim: realistic lidar simulation by leveraging the real world")). To give agents the capacity to infer the velocity and acceleration of nearby vehicles, we concatenate the LiDAR observations from the past 5 timesteps.
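The per-timestep reward and the noisy LiDAR observation described above can be sketched as follows. This is a minimal illustration under our own naming; the released environments may implement these details differently.

```python
# Sketch of the per-timestep reward terms and the noisy, stacked LiDAR observation.
# All names are ours; exact implementation details may differ in the released code.
import numpy as np

def step_reward(reached_goal: bool, first_collision: bool,
                a_t: np.ndarray, a_prev: np.ndarray, a_max: float,
                dist_to_goal: float, initial_dist_to_goal: float,
                horizon: int) -> float:
    r = 0.0
    r += 1.0 if reached_goal else 0.0                                # +1 on reaching the destination
    r -= 1.0 if first_collision else 0.0                             # -1 on the first collision
    r -= (1.0 / horizon) * float(np.sum(((a_t - a_prev) / a_max) ** 2))   # action-smoothness penalty
    r -= (1.0 / horizon) * (dist_to_goal / initial_dist_to_goal)          # distance-to-goal penalty
    return r

def noisy_lidar(ranges: np.ndarray, drop_frac: float, rng: np.random.Generator) -> np.ndarray:
    """Drop drop_frac of the rays uniformly at random by setting them to 0."""
    mask = rng.random(ranges.shape[0]) < drop_frac
    out = ranges.copy()
    out[mask] = 0.0
    return out

def stacked_lidar(history: list, n_frames: int = 5) -> np.ndarray:
    """Concatenate the most recent n_frames scans (assumes at least one scan)."""
    pad = [history[0]] * max(0, n_frames - len(history))
    return np.concatenate(pad + history[-n_frames:])
```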
4 Policy Optimization ---------------------- ### 4.1 Policy Parameterization In our experiments we consider the following two parameterizations for our policy network(s): 1. Fixed Track Model: We optimize policies that output a multinomial distribution over a fixed set of discretized acceleration values. This distribution is defined by π_ϕ(a|o), where π_ϕ is our policy network, a is the acceleration, and o is the observation. This acceleration is used to drive the vehicle along a fixed trajectory from the source to the target destination. This model trains efficiently but precludes the emergence of lanes. 2. Spline Model: To train agents that are capable of discovering lanes, we use a two-stage formulation inspired by Zeng et al. ([2019](#bib.bib43 "End-to-end interpretable neural motion planner")) in which trajectory shapes are represented by clothoids and time-dependence is represented by a velocity profile. Our overall policy is factored into two “subpolicies" – a spline subpolicy and an acceleration subpolicy. The spline subpolicy is tasked with predicting the spline along which the vehicle is supposed to be driven. This subpolicy conditions on an initial local observation of the environment and predicts the spline (see Section [5](#S5 "5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")). We use a Centripetal Catmull–Rom Spline (Catmull and Rom, [1974](#bib.bib9 "A class of local interpolating splines")) to parameterize this spatial path. The acceleration subpolicy follows the same parameterization as the Fixed Track Model, and controls the agent’s motion along this spline. We formalize the training algorithm for this bilevel problem in Section [4.4](#S4.SS4 "4.4 Bilevel Optimization for Joint Training of Spline/Acceleration Subpolicies ‣ 4 Policy Optimization ‣ Emergent Road Rules In Multi-Agent Driving Environments"). Note that the fixed track model is a special case of the spline model parameterization, where the spline is hard-coded into the environment. These splines can be extracted from lane graphs such as those found in the HD maps provided by the nuScenes dataset (Caesar et al., [2019](#bib.bib2 "nuScenes: A multimodal dataset for autonomous driving")). ### 4.2 Proximal Policy Optimization using Centralized Critic We consider a Centralized Training with Decentralized Execution approach in our experiments. During training, the critic has access to all the agents’ observations while the actors only see the local observations. We use Proximal Policy Optimization (PPO) (Schulman et al., [2017](#bib.bib3 "Proximal policy optimization algorithms")) and Generalized Advantage Estimation (GAE) (Schulman et al., [2015](#bib.bib5 "High-dimensional continuous control using generalized advantage estimation")) to train our agents. Let V_ϕ denote the value function. To train our agent, we optimize the following objective:

L_1(ϕ) = E[ min( r̃(ϕ) Â, clip(r̃(ϕ), 1−ϵ, 1+ϵ) Â ) − c_1 (V_ϕ(s,a) − V_target) − c_2 H(s, π_ϕ) ],

where r̃(ϕ) = π_ϕ(a|o) / π_{ϕ_old}(a|o), Â is the estimated advantage, H(⋅) measures entropy, {ϵ, c_1, c_2} are hyperparameters, and V_target is the value estimate recorded during simulation. Training is performed using a custom adaptation of SpinningUp (Achiam, [2018](#bib.bib7 "Spinning Up in Deep Reinforcement Learning")) for MARL and Horovod (Sergeev and Balso, [2018](#bib.bib6 "Horovod: fast and easy distributed deep learning in TensorFlow")). The agents share a common policy network in all the reported experiments.
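A minimal sketch of this objective is shown below, assuming per-sample log-probabilities, advantages, and value targets have already been computed. We use a standard squared-error critic term here, and all names are ours rather than the authors’ training code.

```python
# Sketch of the clipped PPO surrogate with value-regression and entropy terms.
# Illustrative only; the squared-error critic loss is a standard choice, not
# necessarily identical to the paper's exact formulation.
import torch

def ppo_objective(logp_new: torch.Tensor, logp_old: torch.Tensor,
                  advantages: torch.Tensor, values: torch.Tensor,
                  value_targets: torch.Tensor, entropy: torch.Tensor,
                  clip_eps: float = 0.2, c1: float = 0.5, c2: float = 0.01) -> torch.Tensor:
    """Returns a scalar loss to minimize (the negative of the L_1 objective)."""
    ratio = torch.exp(logp_new - logp_old)                        # probability ratio r~(phi)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_term = torch.min(unclipped, clipped).mean()
    value_term = ((values - value_targets) ** 2).mean()           # critic regression to V_target
    entropy_term = entropy.mean()                                 # exploration bonus
    return -(policy_term - c1 * value_term + c2 * entropy_term)
```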
In our environments, the number of agents present can vary over time, as vehicles reach their destinations and new agents spawn. To enforce permutation invariance across the dynamic pool of agents, the centralized critic takes as input the mean of the latent vectors obtained from all the observations.

Algorithm 1: Alternating Optimization for Spline and Acceleration Control
Result: Trained subpolicies π_θ and π_ϕ
Initialize: π_θ ← spline subpolicy, π_ϕ ← acceleration subpolicy, V_ϕ ← value function for acceleration control
for i = 1 … N do
    /* Given π_ϕ, optimize π_θ */
    for k = 1 … K_1 do
        Collect a set of partial trajectories D_{1,k} using a_s ∼ π_θ(o_s) and a_a ← argmax π_ϕ(a|o_a);
        Compute and store the normalized rewards R̄;
    end for
    Optimize the parameters θ using the objective L_2(θ) and the stored trajectories D_1;
    /* Given π_θ, optimize π_ϕ and V_ϕ */
    for k = 1 … K_2 do
        Collect a set of partial trajectories D_{2,k} using a_s ← argmax π_θ(a|o_s) and a_a ∼ π_ϕ(o_a);
        Compute and store the advantage estimates Â using GAE;
    end for
    Optimize the parameters ϕ using the objective L_1(ϕ) and the stored trajectories D_2;
end for

### 4.3 Single-Step Proximal Policy Optimization In a single-step MDP, the expected return modelled by the critic is equal to the reward from the environment as there are no future timesteps. Hence, optimizing the critic is unnecessary in this context. Let R̄_t denote the normalized reward. The objective function defined in Sec [4.2](#S4.SS2 "4.2 Proximal Policy Optimization using Centralized Critic ‣ 4 Policy Optimization ‣ Emergent Road Rules In Multi-Agent Driving Environments") reduces to:

L_2(θ) = E[ min( r̃(θ) R̄, clip(r̃(θ), 1−ϵ, 1+ϵ) R̄ ) − c_2 H(s, π_θ) ]   (1)

### 4.4 Bilevel Optimization for Joint Training of Spline/Acceleration Subpolicies In this section, we present the algorithm we use to jointly train two RL subpolicies where one subpolicy operates in a single step and the other operates over a time horizon T ≥ 1. The subpolicies operate oblivious to each other, and cannot interact directly. The reward for the spline subpolicy is the undiscounted sum of the rewards received by the acceleration subpolicy over the time horizon. Pseudocode is provided in Algorithm [1](#algorithm1 "Algorithm 1 ‣ 4.2 Proximal Policy Optimization using Centralized Critic ‣ 4 Policy Optimization ‣ Emergent Road Rules In Multi-Agent Driving Environments"). The algorithm runs for a total of N iterations. For each iteration, we collect K_1 and K_2 samples from the environment to train the spline and acceleration subpolicy respectively. We denote actions and observations using (a_s, o_s) and (a_a, o_a) (not to be confused with the notation used in Section [3](#S3 "3 Problem Setting ‣ Emergent Road Rules In Multi-Agent Driving Environments")) for the spline and acceleration subpolicy respectively.
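The alternating scheme of Algorithm 1 can be sketched as the following training skeleton. The callables are placeholders for the rollout-collection and PPO-update steps; their names are ours, and the released trainer may be organized differently.

```python
# Skeleton of the alternating (bilevel) optimization of Algorithm 1.
# The callables are stand-ins for the actual rollout and PPO-update code.
from typing import Callable, List, Any

def train_bilevel(collect_spline_samples: Callable[[], List[Any]],   # acceleration subpolicy acts greedily
                  update_spline: Callable[[List[Any]], None],        # single-step PPO, objective L_2(theta)
                  collect_accel_samples: Callable[[], List[Any]],    # spline subpolicy acts greedily
                  update_accel: Callable[[List[Any]], None],         # PPO with centralized critic, objective L_1(phi)
                  n_iters: int, k1: int, k2: int) -> None:
    for _ in range(n_iters):
        # Phase 1: hold the acceleration subpolicy fixed and gather K_1
        # single-step samples (spline action, normalized trajectory return).
        spline_batch = []
        for _ in range(k1):
            spline_batch.extend(collect_spline_samples())
        update_spline(spline_batch)

        # Phase 2: hold the spline subpolicy fixed and gather K_2 trajectories
        # for the acceleration subpolicy and its critic (GAE advantages).
        accel_batch = []
        for _ in range(k2):
            accel_batch.extend(collect_accel_samples())
        update_accel(accel_batch)
```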
Experiments that do not require an explicit emergence of lanes – [5.1](#S5.SS1 "5.1 Stopping at a Traffic Signal ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments"), [5.3](#S5.SS3 "5.3 Right of Way ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments"), [5.4](#S5.SS4 "5.4 Communication ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments"), [5.7](#S5.SS7 "5.7 Slowing down near a Crosswalk ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") – use the fixed track model. We use the Spline Model for modeling lane emergence in Experiments [5.2](#S5.SS2 "5.2 Emergence of Lanes ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") and [5.5](#S5.SS5 "5.5 Fast Lanes on a Highway ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments"). The results in Section [5.6](#S5.SS6 "5.6 Minimum Distance Between Vehicles ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") are a consequence of the increased number of training agents, and are observed using either agent type. Qualitative rollouts are provided on our project page. ![](https://media.arxiv-vanity.com/render-output/7815516/x2.png) Figure 2: Traffic Light Usage Actions taken by the agents with varying amounts of perception noise and traffic signal (represented by color coding the box plots). A strong correlation exists between the acceleration, traffic signal and distance from the intersection. As the agent approaches the intersection, the effect of the red signal on the actions is more prominent (characterized by the reduced variance). However, the variance for the green signal increases, since agents need to marginally slow down upon detecting an agent in front of them. With more LiDAR noise, detecting the location of the intersection becomes difficult, so the agents prematurely slow down far from the intersection. Agents with better visibility can potentially safely cross the intersection on a red light, hence the mean acceleration near the intersection goes down with increasing LiDAR noise. Finally, with an increasing number of agents, the variability of the actions increases due to the presence of leading vehicles on the same path. ![](https://media.arxiv-vanity.com/render-output/7815516/x3.png) Figure 3: Traffic light usage Spatial 2D histogram of a synthetic intersection showing the fraction of agents that see a green signal. Once agents have entered the intersection, we consider the signal they saw just before entering it. A darker shade of green in the intersection shows that fewer agents violated the traffic signal. In a single-agent environment, there is no need to follow the signals. Agents trained on an environment with higher spatial density (top row) violate the signal less frequently. Agents also obey the signals more with increased perception noise (bottom row). ![](https://media.arxiv-vanity.com/render-output/7815516/x4.png) Figure 4: Lanes emerge with more perception noise When perception noise is increased, agents follow lanes more consistently (higher peaks for the 25% and 50% LiDAR models). However, after a certain threshold, imperfect perception leads to poor convergence, as can be seen for the 75% LiDAR model. ### 5.1 Stopping at a Traffic Signal Traffic signals are used to impose an ordering on traffic flow direction in busy 4-way intersections.
We simulate a 4-way intersection, where agents need to reach the opposite road pocket in minimum time. We constrain the agents to move along a straight-line path. We employ the fixed track model and agents learn to control their accelerations to reach their destination. We study the agent behaviors by varying the number of training agents and their perception noise (Figure [2](#S5.F2 "Figure 2 ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") & [3](#S5.F3 "Figure 3 ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")). Note that the agents merely observe a ternary value representing the traffic light’s state, not color. To make the plots in this section, we visually inspect rollouts for each converged policy to find a permutation of the ternary states that aligns with human red/yellow/green traffic light conventions. ![](https://media.arxiv-vanity.com/render-output/7815516/x5.png) Figure 5: Lanes emerge with more agents With a low spatial agent density, the subpolicies converge to a roundabout motion. On increasing the spatial density, the agents learn to jointly obey the traffic signals and lanes. Agents trained with higher spatial density follow two lanes exclusively on one side of the road. ![](https://media.arxiv-vanity.com/render-output/7815516/x6.png) Figure 6: Spatial Positions on Intersection Environment Agents trained in an 8-agent environment cross the intersection in two discrete lanes on the right-hand side of the road. The chosen lane depends on the starting positions of the agents. Agents starting towards the left tend to take the inner lane to allow a faster traffic flow. ### 5.2 Emergence of Lanes To analyze lane emergence, we relax the constraint on the fixed agent paths in the setup of Section [5.1](#S5.SS1 "5.1 Stopping at a Traffic Signal ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments"). We use the Bilevel PPO Algorithm (Section [4.4](#S4.SS4 "4.4 Bilevel Optimization for Joint Training of Spline/Acceleration Subpolicies ‣ 4 Policy Optimization ‣ Emergent Road Rules In Multi-Agent Driving Environments")) to train the two subpolicies. The spline subpolicy predicts a deviation from a path along the road’s central axis connecting the start position to the destination, similar to the GPS navigation maps used by human drivers. The acceleration subpolicy uses the same formulation as Section [5.1](#S5.SS1 "5.1 Stopping at a Traffic Signal ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments"). To empirically analyze the emergence of lanes, we plot the “Normalized Lane Position" of the agents over time. “Normalized Lane Position" is the directional deviation of the agent from the road axis. We consider the right side of the road (in the ego frame) to have a positive “Normalized Lane Position". Figures [4](#S5.F4 "Figure 4 ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") & [5](#S5.F5 "Figure 5 ‣ 5.1 Stopping at a Traffic Signal ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") show the variation with LiDAR noise and number of training agents respectively. Figure [6](#S5.F6 "Figure 6 ‣ 5.1 Stopping at a Traffic Signal ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") shows the spatial positions of the agents for the 8-agent, perfect-perception environment.
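A minimal sketch of how the “Normalized Lane Position" metric can be computed from an agent’s position and the road’s central axis is given below; the helper name and the normalization by road half-width are our assumptions.

```python
# Sketch of the "Normalized Lane Position": signed lateral deviation from the
# road's central axis, positive on the right-hand side of the direction of travel.
import numpy as np

def normalized_lane_position(position: np.ndarray, axis_point: np.ndarray,
                             axis_dir: np.ndarray, half_width: float) -> float:
    d = axis_dir / np.linalg.norm(axis_dir)     # unit vector along the road axis
    rel = position - axis_point                 # agent position relative to a point on the axis
    # Signed lateral offset: positive when the agent is to the right of the axis direction.
    lateral = rel[0] * d[1] - rel[1] * d[0]
    return float(lateral / half_width)
```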
### 5.3 Right of Way For tasks that can be performed simultaneously and take approximately equal time for completion, a First In First Out (FIFO) scheduling strategy minimizes the average waiting time. In the context of driving through an intersection where each new agent symbolizes a new task, the agent that arrives first at the intersection should also be able to leave the intersection first. In other words, given any two vehicles, the vehicle arriving at the intersection first has the “right of way" over the other vehicle. Let the time at which agent i ∈ [n] arrives at the intersection be (t_a)_i and leaves the intersection be (t_d)_i. If ∃ j ∈ [n] ∖ {i} such that (t_a)_i < (t_a)_j and (t_d)_i > (t_d)_j, we say that agent j does not respect agent i’s right of way. We evaluate this metric on a model trained on a nuScenes intersection (Figure [9](#S5.F9 "Figure 9 ‣ 5.3 Right of Way ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")). We observe that, at convergence, the agents follow this right of way 85.25±8.9% of the time (Figure [9](#S5.F9 "Figure 9 ‣ 5.3 Right of Way ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")). ![](https://media.arxiv-vanity.com/render-output/7815516/x7.png) Figure 7: Right of Way Agents are increasingly able to successfully reach their destinations (denoted by the decreasing red line) with more training episodes. The agents also increasingly obey the right of way (denoted by the increasing green line). The error bars are constructed using μ ± σ². ![](https://media.arxiv-vanity.com/render-output/7815516/x8.png) Figure 8: nuScenes Intersection used for Right of Way (see Section [5.3](#S5.SS3 "5.3 Right of Way ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")) and Communication (see Section [5.4](#S5.SS4 "5.4 Communication ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")) Training and Evaluation. The red and green dots mark the location of the traffic signals and their current ternary state. ![](https://media.arxiv-vanity.com/render-output/7815516/x9.png) Figure 9: Emergent Communication with Perception Noise Speaker Consistency & Pearson coefficient between the agent’s heading and its sent message increase with increasing perception noise. Since all agents have faulty sensors, the agents aid each other to navigate by propagating their heading to their trailing car. ### 5.4 Communication One way to safely traverse an intersection is to signal one’s intention to nearby vehicles. We analyze the impact of perception noise on emergent communication at an intersection (Figure [9](#S5.F9 "Figure 9 ‣ 5.3 Right of Way ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")). In particular, we measure the Speaker Consistency (SC), proposed in Jaques et al. ([2019](#bib.bib54 "Social influence as intrinsic motivation for multi-agent deep reinforcement learning")). SC can be considered the mutual information between an agent’s message and its future action. We report the mutual information and the Pearson coefficient (Freedman et al., [2007](#bib.bib37 "Statistics (international student edition)")) between the agent’s heading and its sent message. We limit the communication channel to one bit for simplicity. Each car only receives a signal from the car in front of it, within −30° and 30°.
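For concreteness, the right-of-way (FIFO) check of Section 5.3 can be sketched as follows; the exact aggregation behind the reported 85.25% figure may differ, and all names are ours.

```python
# Sketch of the right-of-way (FIFO) compliance metric: agent j violates agent i's
# right of way when i enters the intersection first but leaves after j.
from itertools import combinations

def right_of_way_compliance(arrival: dict, departure: dict) -> float:
    """arrival/departure map agent id -> intersection entry/exit time."""
    pairs, violations = 0, 0
    for i, j in combinations(arrival.keys(), 2):
        first, second = (i, j) if arrival[i] < arrival[j] else (j, i)
        pairs += 1
        if departure[first] > departure[second]:   # the later arrival overtook inside the intersection
            violations += 1
    return 1.0 - violations / max(pairs, 1)
```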
Figure [9](#S5.F9 "Figure 9 ‣ 5.3 Right of Way ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") shows that agents rely more heavily on communication at intersections when perception becomes less reliable. ### 5.5 Fast Lanes on a Highway Highways have dedicated fast lanes to allow a smooth flow of traffic. In this experiment, we empirically show that autonomous vehicles exhibit a similar behavior of forming fast lanes while moving on a highway when trained to maximize traffic flow. We consider a straight road with a uni-directional flow of traffic. Agents are spawned at random positions along the road’s axis. ![](https://media.arxiv-vanity.com/render-output/7815516/x10.png) Figure 10: Spatial Positioning in Highway Environment Agents start from one of the positions marked by ⋆, and need to reach the end position represented by the solid black line. The agents with a higher acceleration rating move on the right-hand side while the lower-rated ones drive on the left-hand side. Every agent is assigned a scalar value called “Acceleration Rating," which scales the agent’s acceleration and velocity limits. Thus, a higher acceleration rating implies a faster car. The spline subpolicy predicts an optimal spline considering this acceleration rating. Even though the agents can decide to move straight by design, it is clearly not an optimal choice as slower cars in front will hinder smooth traffic flow. Figure [10](#S5.F10 "Figure 10 ‣ 5.5 Fast Lanes on a Highway ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") shows that agents are segregated into different lanes based on their Acceleration Rating. Figure [11](#S5.F11 "Figure 11 ‣ 5.5 Fast Lanes on a Highway ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") visualizes this behavior over time. ![](https://media.arxiv-vanity.com/render-output/7815516/x11.png) Figure 11: Fast Lane Emergence Visualization of rollouts from a 10-agent highway environment. On the y-axis, we show the agent’s position relative to the axis of the road normalized by road width. The agents reach a consensus where the faster agents end up in the right-hand lane. This pattern ensures that slower vehicles do not obstruct faster vehicles once the traffic flow has reached a steady state. ### 5.6 Minimum Distance Between Vehicles ![](https://media.arxiv-vanity.com/render-output/7815516/x12.png) Figure 12: Safety Distance Maintained in nuScenes Environment Agents need to maintain a minimum of the Speed-Matching Distance to be able to safely stop. 98.45% of the agents in this plot learn to respect this distance threshold. In this task, we evaluate the extent to which the agents learn to respect a minimum safety distance between agents while driving. When agents are too close, they are at a greater risk of colliding; when agents are too far, they are not as efficient in traveling from a to b. To derive a human-like “safety distance", we assume agents can change their velocity according to v² = u² + 2ad, where v and u are the final and initial velocities respectively, a is acceleration, and d is the distance for which the acceleration remains constant. Hence, for an agent to stop entirely from a state with velocity v_0, it needs at least a distance of v_0²/(2a_max) in front of it, where a_max is the maximum possible deceleration of the agent. Our agents perceive the environment through LiDAR; thus, agents can estimate nearby agents’ velocity and acceleration.
We define the safe distance as the distance needed for a trailing agent to have a zero velocity in the leading agent’s frame. We assume that the leading agent travels with a constant velocity, and as such, the safe distance is defined by Δs²/(2a_max), where Δs is the relative velocity. Any car having a distance greater than this can safely slow down. Agents trained on nuScenes intersections obey this safety distance around 98.45% of the time (Figure [12](#S5.F12 "Figure 12 ‣ 5.6 Minimum Distance Between Vehicles ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments")). ![](https://media.arxiv-vanity.com/render-output/7815516/x13.png) Figure 13: Safety Distance from Pedestrians We observe that most of the agents maintain a distance greater than the recommended speed-matching distance from the pedestrians. The shaded regions in the KDE plots indicate the 95% confidence interval, and the dotted line is the sample mean. ### 5.7 Slowing down near a Crosswalk In this task, we evaluate whether agents can detect pedestrians and slow down in their presence. We augment the environment setup of Sec. [5.5](#S5.SS5 "5.5 Fast Lanes on a Highway ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") to include a crosswalk where at most 10 pedestrians are spawned at the start of every rollout. The pedestrians cross the road with a constant velocity. If any agent collides with a pedestrian, they receive a collision reward of -1, and the simulation for that agent stops. The KDE plots in Figure [13](#S5.F13 "Figure 13 ‣ 5.6 Minimum Distance Between Vehicles ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") show that the agents indeed detect the pedestrians, and most of them maintain a distance greater than 6m. To determine if the agents can safely stop and prevent collision with the pedestrians, we calculate a safe stopping distance of s²/(2a_max), where s is the velocity of the agent. In the scatter plot, we observe that most agents adhere to this minimum distance and drive at a distance which lies in the safe driving region. 6 Discussion ------------- ![](https://media.arxiv-vanity.com/render-output/7815516/x14.png) Figure 14: Safety Distance for Humans vs. Safety Distance for RL Agents The agents trained in our MDP (right) tend to violate the safety distance slightly more than human drivers (left) (98.45% vs. 99.56%), but in both cases a safety distance is observed the vast majority of the time (green triangular region). ### 6.1 Statistics of Human Driving In some cases, the same statistics accumulated over agent trajectories that we use in Section [5](#S5 "5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments") to quantitatively demonstrate emergence can also be accumulated over the human driving trajectories labeled in the nuScenes dataset. In Figure [14](#S6.F14 "Figure 14 ‣ 6 Discussion ‣ Emergent Road Rules In Multi-Agent Driving Environments"), we visualize safety distance statistics across nuScenes trajectories and safety distance statistics across RL agents trained on nuScenes intersections side-by-side. The nuScenes trainval split contains 64386 car instance labels, each with an associated trajectory. For each location along the trajectory, we calculate the safety distance as described in Section [5.6](#S5.SS6 "5.6 Minimum Distance Between Vehicles ‣ 5 Emergent Social Driving Behavior ‣ Emergent Road Rules In Multi-Agent Driving Environments").
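A minimal sketch of the speed-matching safety-distance check used in Sections 5.6 and 6.1 is given below; the names are ours, and we assume a safety gap is only required when the trailing agent is closing in (positive relative speed).

```python
# Sketch of the speed-matching safety-distance check: a follower is compliant
# when its gap to the leader exceeds (closing speed)^2 / (2 * a_max).
import numpy as np

def safe_distance(rel_speed: float, a_max: float) -> float:
    """Minimum gap needed to reach zero relative velocity under maximum braking."""
    closing = max(rel_speed, 0.0)          # no gap is required if the follower is not closing in
    return (closing ** 2) / (2.0 * a_max)

def compliance_rate(gaps: np.ndarray, rel_speeds: np.ndarray, a_max: float) -> float:
    """Fraction of (leader, follower) samples whose actual gap exceeds the safety distance."""
    required = (np.maximum(rel_speeds, 0.0) ** 2) / (2.0 * a_max)
    return float(np.mean(gaps >= required))
```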
The same computation is performed over RL agent trajectories. The agents trained in our MDP tend to violate the safety distance more than human drivers. However, in both cases, a safety distance is observed the vast majority of the time (green triangular region). ### 6.2 Future Work By parameterizing policies such that agents must follow the curve generated by the spline subpolicy at initialization (see Section [4.4](#S4.SS4 "4.4 Bilevel Optimization for Joint Training of Spline/Acceleration Subpolicies ‣ 4 Policy Optimization ‣ Emergent Road Rules In Multi-Agent Driving Environments")), we prevent lane change behavior from emerging. The use of a more expressive action space should address this limitation at the cost of training time. Additionally, the fact that our reward is primarily based on agents reaching destinations means that convergence is slow on maps that are orders of magnitude larger than the vehicles’ dimensions. One possible solution to training agents to navigate large maps would be to generate a curriculum of target destinations, as in Mirowski et al. ([2018](#bib.bib32 "Learning to navigate in cities without a map")). 7 Conclusion ------------- In this paper, we identify a lightweight multi-agent MDP that empirically captures the driving problem’s essential features. We equip our agents with a sparse LiDAR sensor and reward agents when they reach their assigned target destination as quickly as possible without colliding with other agents in the scene. We observe that agents in this setting rely on a shared notion of lanes and traffic lights to compensate for their noisy perception. We believe that dense multi-agent interaction and perception noise are critical ingredients in designing simulators that seek to instill human-like road rules in self-driving agents.
35d452de-56a6-42e9-86e7-eb7a127202f9
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored - that they are unfairly being held to higher standards than everyone else. I'm willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter. — Eliezer Yudkowsky, Frequentist Statistics are Frequently Subjective Imagine if, way back at the start of the scientific enterprise, someone had said, "What we really need is a control group for science - people who will behave exactly like scientists, doing experiments, publishing journals, and so on, but whose field of study is completely empty: one in which the null hypothesis is always true. "That way, we'll be able to gauge the effect of publication bias, experimental error, misuse of statistics, data fraud, and so on, which will help us understand how serious such problems are in the real scientific literature." Isn't that a great idea? By an accident of historical chance, we actually have exactly such a control group, namely parapsychologists: people who study extra-sensory perception, telepathy, precognition, and so on. There's no particular reason to think parapsychologists are doing anything other than what scientists would do; their experiments are similar to those of scientists, they use statistics in similar ways, and there's no reason to think they falsify data any more than any other group. Yet despite the fact that their null hypotheses are always true, parapsychologists get positive results. This is disturbing, and must lead us to wonder how many positive results in real science are actually wrong. The point of all this is not to mock parapsychology for the sake of it, but rather to emphasise that parapsychology is useful as a control group for science. Scientists should aim to improve their procedures to the point where, if the control group used these same procedures, they would get an acceptably low level of positive results. That this is not yet the case indicates the need for more stringent scientific procedures. Acknowledgements The idea for this mini-essay and many of its actual points were suggested by (or stolen from) Eliezer Yudkowsky's Frequentist Statistics are Frequently Subjective, though the idea might have originated with Michael Vassar. This was originally published at a different location on the web, but was moved here for bandwidth reasons at Eliezer's suggestion. Comments / criticisms A discussion on Hacker News contained one very astute criticism: that some things which may once have been considered part of parapsychology actually turned out to be real, though with perfectly sensible, physical causes. Still, I think this is unlikely for the more exotic subjects like telepathy, precognition, et cetera." The above is a chunk of text from a post created by a user on the website LessWrong. Considering the role of parapsychology as a control group for science, could you elaborate on the specific procedural improvements that scientists should adopt to minimize false positive results? Certainly, there are several procedural improvements that scientists can adopt to minimize false positive results. 1. **Pre-registration of studies:** One of the ways to reduce false positives is by pre-registering studies. This involves researchers specifying their hypothesis, methods, and analysis plan before they collect data. 
This prevents p-hacking or data dredging, where researchers manipulate their data or statistical analyses until they find a significant result. 2. **Replication:** Replication is a cornerstone of science. If a study's results can be replicated by other researchers, it increases confidence in the findings. Encouraging replication studies and valuing them in the scientific community can help to weed out false positives. 3. **Transparency and Open Data:** Encouraging transparency in data and methodology can allow other researchers to scrutinize the findings more closely. This can help identify any potential errors or biases that might have led to a false positive result. 4. **Improving Statistical Literacy:** A better understanding of statistics can help researchers design more robust studies and avoid common pitfalls that can lead to false positives. This includes understanding the limitations of p-values, the importance of confidence intervals, and the difference between statistical and practical significance. 5. **Using Bayesian Statistics:** Some argue that Bayesian statistics can be a useful tool in reducing false positives. Unlike frequentist statistics, which calculates the probability of the data given the null hypothesis, Bayesian statistics calculates the probability of the hypothesis given the data. This can provide a more intuitive and direct measure of evidence. 6. **Addressing Publication Bias:** There is a tendency for positive results to get published more than negative results. This can create a skewed perception of the evidence. Encouraging the publication of all results, regardless of whether they are positive, negative, or inconclusive, can help to provide a more balanced view of the evidence. By adopting these procedures, scientists can reduce the rate of false positives, improving the reliability and credibility of scientific research. How might the incorporation of machine learning and artificial intelligence in the scientific process aid in reducing false positives, particularly in the context of the procedural improvements you've mentioned? Machine learning (ML) and artificial intelligence (AI) can indeed play a significant role in reducing false positives in scientific research. Here's how they can contribute to the procedural improvements mentioned: 1. **Pre-registration of studies:** AI can be used to monitor and ensure adherence to pre-registered protocols. It can automatically check if the analysis carried out matches the pre-registered plan, thereby reducing the chances of p-hacking. 2. **Replication:** ML algorithms can be used to automate the replication of experiments by running the same analysis on different datasets. This can increase the speed and efficiency of replication studies. 3. **Transparency and Open Data:** AI can be used to manage and navigate large open data repositories. It can also automate the process of checking the data for inconsistencies or errors, which can contribute to false positives. 4. **Improving Statistical Literacy:** AI-powered educational tools can provide personalized learning experiences to improve researchers' statistical literacy. They can adapt to the learner's pace and provide targeted feedback to ensure a deep understanding of statistical concepts. 5. **Using Bayesian Statistics:** AI and ML are inherently suited to implementing Bayesian approaches. 
They can handle the computational complexity of Bayesian methods and can be used to build models that provide a direct measure of the evidence supporting different hypotheses. 6. **Addressing Publication Bias:** AI can be used to scan and analyze the vast amount of scientific literature to identify publication bias. For example, it can identify if positive results are being overrepresented in a particular field. Furthermore, AI and ML can also aid in hypothesis generation, experimental design, and data analysis, making the scientific process more efficient and reliable. They can sift through vast amounts of data to find patterns that humans might miss, and they can test a multitude of hypotheses at a speed that would be impossible for human researchers. However, it's important to note that while AI and ML can greatly aid scientific research, they are tools that need to be used judiciously and with a clear understanding of their limitations.
3a5c9482-2e61-4482-8482-1d550c72dfd5
trentmkelly/LessWrong-43k
LessWrong
Assessment of intelligence agency functionality is difficult yet important Summary: When it comes to observing intelligence agencies, it's hard to see the hardened parts and easy to observe the soft corrupt parts. This leads to a bias where very large numbers of people overestimate how prevalent the easily-observed soft and harmless parts are. This can sometimes even result in a dangerous and prevalent estimation, among people whose careers are much further ahead than yours, that the entire intelligence agency is harmless and irrelevant, when it actually isn't. Intelligence agencies are probably a mix of both less-functional, less-relevant parts, and also more-functional, more-relevant parts that have a disproportionately large influence over governments and policies; and it is a mistake to assume that intelligence agencies are homogeneously composed of non-functional non-relevant parts that aren't worth paying any attention to, even if such a belief is a popular norm.   Why intelligence agencies are dangerous  There are a wide variety of situations where intelligence agencies suddenly become relevant, without warning. For example, most or all of the US Natsec establishment might suddenly and unanimously change its stance on Gain of Function research, such as if US-China relations or US-Russian relations once again hit a new 25-year low (which has actually been happening very frequently over the last few years). Either the leadership of an agency, or a powerful individual in an agency with authority to execute operations, or a corrupt clique, might personally make a judgement that the best way to expedite or restart GOF research is to target various people who are the most efficient or effective at opposing GOF research. This need not be anywhere near the most effective way to expedite or protect GOF research; it just needs to look like that, sufficiently for someone to sign off on that, or even for them to merely think that it would look good to their boss. Competent or technologically advanced capabilities can obviously be mixed wi
7ce04471-b69b-4196-98bb-884173655775
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
253 Propositions Concerning Digital Minds and Society 2 Fixed Audio hello and welcome to session 253 in the aisafety.com reading group tonight we'll be discussing the second half of the article propositions concerning digital minds and society by nick bostrom and carl schulman or actually we would be doing that except this is recorded later and uh because the first version had some problems with the audio nick bostrom and carl german are both employed at the future of humanity institute in oxford and this is the first draft and we're looking at the second half of the article one of the things that i've discovered since the first part was produced is that this is in fact something that is was supposed to become a book and uh nick bostrom has changed priorities and so that could explain some of the disjointedness that i was pointing out in the previous video let's talk about ai empowered social organization and how coordination can change if we get more advanced ai one of the things we could see if it becomes possible to copy agents is a much larger degree of predictability in what would be the agent's motivation and how would they act in different situations um nick bustrom points out that uh non-indexical goals here could give uh could put a limit on the predictability if we have non-indexical goals are things referring to like i and now and here and obviously um if we have um if we copy an agent and try to uh then i will refer to a different agent and now and here will also be different so you're not going to get a 100 predictability but you might get um something for most indexable goals non-indexical goals and i would actually argue that we are unlikely to really want to have a lot of index code goals in our agents the things we want to put them to to try to optimize or improve are not likely to be directly recall related to a single agent if we are indeed able and to create multiple agents like this also even though you have predictability in motivation that doesn't actually buy you that much in real life because the the new copied ai will be situated in a different context so that means you won't get anything near full predictability from this one thing that will give a problem for copying ais will be that the ais if they have indexical uh or they might have uh motivations that are uh not necessarily perfectly aligned with that clan for instance selling the ip uh the secret data that they and everyone in the clan has uh is something they would have a desire to do and some kind of restrictions for that would probably be necessary um on the other hand uh uh legal sanctions from the rest of the world towards the ai will need to be modified possibly to uh either target the clan the creators or the goals uh there's some amusing about whether this is an adventurous advantage for instance in war um and um bostrom claims it will eventually become possible for a principal to have highly aligned agents and that is uh again assuming with this attacking uh understanding that the alignment problem is actually not just solvable but explicitly solved and that's one of my key disagreements with bathroom boston some of the coordination protocols that we are using right now could be undermined by ai and that's a really interesting thing and something that i think we should do more research in how things can go bad before we have full agi some of the boston doesn't give any concrete suggestions which is sad because i think it's really important some that i could think of was if captures were broken that 
would be something that could have substantial implications ernest davis has written about the info apocalypse the idea that we can have such a degree of misinformation that we'll just end up giving up on trying to learn what is the actual truth and that could be many more boston has an interesting analysis on this on levels of coordination and the two things that he cares about in particular are coordination at the high level which is states and a lower level which is corporations i think it's a very interesting analysis and i think it could be meaningfully extended both to have supernatural coordination like the united nations and a lower than uh cooperation something like the individual level um and bostrom has the uh uh it's the first time i've seen the the conclusion that if we get more coordination at one level we that could in fact result in lower coordination at the other levels normally when people talk about improved coordination they just assume that the there's a rising tide um we could see um uh criminal conspiracies would be uh that's kind of at the level of corporations if they become much more powerful then the state would have uh less power or we could uh see the state obtaining great power to lock people in in different ways um we will have less principal agent problems that could matter for organizations a lot um international organizations could be empowered by treaty buts we could see permanently stable autocratic regimes um bostrom suggests this would make one more likely i think that is a um possibility i would actually argue that it would go the other way but it's um but it's certainly difficult to say and bostrom also argues that what preventing organizations at the supernational level could become stronger um and uh and finally a an idea that we could get organizations that are super national uh and are robust to states um and uh so when i look at this just to see where the power of the different levels differ um then my thought is which level benefits the most from ai and i think my answer would this would strongly be at the state level i would expect that states have uh the assure and de facto power to obtain most of the benefits of this power even though right now it looks like corporations have more ai power in the sense that obviously open ai and deep mind seem to be dramatically more capable than government actors but i don't think i don't expect they would be able to leverage that into permanent positions of power i believe if they became very powerful the the state would be able to in practice just shut them down but of these four levels the level that i is most worried about is the level of individual humans because humans crucially can't benefit from better coordination so i could uh and i would in fact expect dramatically better ai enabled coordination to result in a shift of power away from individual humans to corporations criminals states supernational organization anyone else in fact treaty bots is something we covered a little last time the idea that we can uh write an uh a an ai to um enforce some kind of treaty and agree to follow the the advice or rulings of this uh treaty part and that could make substantially more complex deals available it might not solve all bargaining problems there is no precise description of which it won't solve but uh fair enough and there could be others other problems caused by bias and poor reasoning that won't be able to solve uh and some that it might be able to solve more advanced er it's difficult to like we need 
a stronger analysis to really see for sure what's going to happen um one of the things boston points out is that extortion would be something that would be unlikely to work against ais because they could make credible commitments to just ignore the extortion um i think in this the the dynamic that i expect will have the greatest impact is that ais are in fact able to not just merge on the level of having treaty bots that are able to um coordinate strongly and use that as combination mechanism but literally merge their utility functions or literally just merge completely i think this capability is potentially very very disruptive and likely to have a much larger effect on macro strategy second part is about satisfying multiple values so we have some resources and how do we distribute those in particular between ai and human bastrom gives the example of three policies one that allocates everything to humans one that allocates everything to super beneficiaries that is in fact in practice super intelligences that have more benefits from these resources and one that allocates one in a in 10 000 to humans and the rest to super beneficiaries and of these three it looks like uh option c is almost as good as um uh as both a is uh almost as good as a from a point of view of humanity and almost as good as b from the point of view of the super intelligence and for many other reasons so if we can take an action that increases the probability of uh options of policy c then that seems to be robustly got from a large number of reasons and my answer to this is a laconic if because i don't actually see any strong policies that would lead us towards this option if there were some they would be good but i'm not sure there are any someone and now i forgot unfortunately who that was pointed out that this in fact also holds if you substitute paper clips from super beneficiaries so something that turns 99.99 of the universe into paper clips might indeed be a very very positive thing potentially and part of this is of course once we have transformative ai we will be able to have a dramatically increased amount of resources and living standard in every possible measurable way population ethics like the total views in population ethics that the thing that matters is like how many uh fulfilling lives exist is something that is in fact mostly uh that could easily be very well satisfied in a way that only refers to faraway galaxies in the distant future meaning that if humans get for our idiosyncratic purposes all the nearby galaxies for the next couple of million years then that doesn't matter at all in the total view because the universe is just enough enough larger and this uh and what we should do is to promote cooperation and compromise over conflict in the development deployment and among ais and that's of course also something it sounds almost like an applause light i i would be very much interested in concrete actions you could take that increase the probability of this happening because i think it's one thing to just say this is the goal another very different thing is to figure out what policies will actually lead to this so what kind of distribution of resources could we or should we aim for at least give everybody a fantastically good life and at least give everyone like one in a trillion i think of all available resources um super beneficiaries or people who are not humans should most have um 10 and it this should be like widely distributed that seems also like a uh robustly good goal um should dead people 
have well possibly uh bostrom is arguing that an argument could be made so maybe devote one percent that would certainly be sufficient and perhaps also help humans non-human animals um bostrom further argues that we should put a lot of weight on reducing suffering especially severe suffering like obviously this is something that all uh i guess all moral frameworks agree on even strict utilitarians would agree that this is important but negative utilitarians would of course uh put a much higher premium on on this and i am unsure if boston means that we should put higher uh weight on this compared to just called calculation utilitarian calculation suggests or we should um just multiply the expected value basically um and another about who should have influence on the course of events uh and just saying that should be a broad range of values for instance with something like uh something like a moral parliament and finally super intelligence should be made and be allowed to play a major role in shaping the future i think that's a statement that a lot of people would strongly disagree with i think a moral case can be made for this if a practical case can be made for this is a very different question and uh i think it's far from obvious and i would like to see some uh some real engagement with this question which i think is actually really uh really funny i i don't think a long reflection necessarily would lead to anything like the minds in iron banks culture mental malleability persuasion and log in we could imagine uh persuasion happen in ways that don't require consent um for digital minds in particular this could be uh by just literally rewriting them um they have uh this is something that totally could happen but it's also something that the ais would be incentivized to try to avoid so it's not always something that's going to happen a lot um another thing that could happen which also would be really problematic would be um to take a copy of a digital mind and experiment it with it until you find a really good social persuasion or some other kind of attack um and this is scary and i am scared that this might be something that generalizes to also with some modification work on biological humans um boston is arguing that because we can repurpose the uh the hardware in a way you can't do with with humans that may uh uh make it attacking more attractive so potential benefits one of the benefits of having uh digital minds would be that uh some kinds of corruption that do empirically happen with humans could just be prevented outright by just uh saving uh the utility function and not uh allowing that to be changed in any way like corruption momentary temptations this kind of thing might not happen to ais at all we could have uh stable promises and commitments and we could um other benefits include duplicating profitable or otherwise valuable mines we could potentially if we have something like uploads we might be able to modify our minds to be more i think that's an inter i'm not a virtue ethicist but i think uh it'd be interesting to ask people who are actually virtue ethnicist what they feel about this i'm not sure they would endorse that to any particular degree and of course we could just make people happier in ways that we haven't really thought about or make people more able to withstand adversity and adapt to new needs or desires there are substantial pitfalls with this one of them is like just logging in too early and i think boston is right to to state that we we need to ensure that this 
doesn't happen i think unfortunately time is not on our side capitalism is a major force in in on the planet right now that pushes towards making early commitments in this sense um so for all the goods of uh capitalism i think it is strongly against us in in this sense what are other uh pitfalls well we might have predictive errors that we are unwilling to correct like uh in the sense that you know a religious person might be uh unwilling to seek out evidence that their religion is false uh we could see social pressure uh i think many kinds of social pressure would be uh would be potentially very strong and very dangerous we could see better criminal exploitation and manipulation and we could see some governments like coercing the the populace if there is some way to do that with digital minds uh to just instill loyalty i don't think uh like i don't actually think the the chinese government is making a secret that if they had the power to um just instill loyalty they would totally do that i think in particular the last one is more likely and more worrying compared to the others that the framing of pitfall is not really not the right one in the sense that a pitfall is something that you yeah you just avoid it and then you're out of it but i think the uh the desire for for governments to um harmonize society as the euphemism is uh is very very strong and it's a strong attractor that we are likely to fall into uh rather than a pitfall we can somehow avoid by default what would be the consequences for epistemology well bostrom has this really cute um metaphor of a prosthesis uh like a fake arm or something like that just inside our brain that allows us to uh have much more accurate models of the world and what are the consequences of our actions will be um so here is the time where i suggest a provocative act that i'm the one that i'm most optimistic about right now and that would be an agi that persuasively perhaps shows that uh building an unaligned agi is not in our interest doing so with by saying only true things and that's a kind of very very limited prosthesis that i think would uh have a the potential to be in fact a world-changing pivotal act let's go back to uh the uh the idea of having a an epistemic prosthesis that would change society in very very many ways in particular the assumption that people are rational would be uh much more accurate and politics would be improved in a great many ways and we would be able to uh uh like the political leadership would be changed in in many strong ways and probably very much for the for the better um dangerous knowledge is something that nick bostrom has written substantially about both like um info has its that are detrimental to the individual and something that is detrimental to society and that's of course something that we'll have more of in the future we may even be able to reach a high epistemic quality consensus about things like policy that is something that requires quite a lot of the uh improvement in epistemics we need ais to be like strongly um aligned with us to be sure that that the things we agree on with their help is honest and objective um and that's of course really really tough uh i don't think that in contrast to my pivotal act this is a the general requirement the ai is totally aligned uh with us in in any particular in any uh specific way whereas um just for whether unaligned agi is problematic is just one question so here we're talking about the full generality of the ai helping us with all questions um and that's of 
course something that is really valuable and also something that we in general would not trust. How would we trust that the AI is giving us correct policy advice? Well, some kind of verification would be necessary, and lay humans would need to trust the people who are doing the verifying; but this kind of social trust is a technology that Bostrom is pretty optimistic about, and it has worked in other circumstances. The consequences of a high epistemic quality consensus: we would have less war. There is a thought amongst many rationalists that war is primarily caused by bad epistemics; I'm not entirely sure this is mainstream, but a lot of people do believe so. We'd get politics that are better in many ways, we would have better treaties, and we may even have questions of ethics, religion and politics resolved. Bostrom is suggesting that we should cooperate behind something like a veil of ignorance, because at this point everybody should commit to cooperating, because they believe that they are right, and I think that is very naive, unfortunately. Eliezer Yudkowsky has an article called Belief in Belief, with some deliberations on why we should in fact not expect this kind of consensus to happen. Another epistemic problem is the potential for disinformation. We might see powerful AIs that are able to persuade humans reliably and strongly, against our intuitions and our wishes. The question for me is whether this is symmetric or asymmetric, because we might also have powerful AIs that are more on our side, if not perfectly aligned then at least on our side in the moment, and you would think that convincing someone of the truth is easier than convincing them of a falsehood. Scott Alexander has an article called Guided by the Beauty of Our Weapons on this topic; I think the jury is out. I think in particular guarding against infohazards is problematic. We might even see something like basilisks, short messages by powerful AIs that just dramatically change our values. This would be really problematic if those were to exist, and we don't actually know. We could see neurological technologies that would also be potentially extremely problematic, and we could see disinformation campaigns that are very, very powerful compared to what we have now. One way around this would be to have a personal AI that guards against disinformation. If it's something that after the fact clarifies or counters misinformation, then that seems possible; something that pre-screens, which is what's required for avoiding infohazards and basilisks, is a really, really difficult task that requires a degree of trust that in general we only give to governments. Bostrom is suggesting we should have norms and laws against disinformation and deceitfulness. We do in fact have those right now; do they work? I think they do have some effect, but a consequence of powerful AI would be that things become more extreme, so I would expect this to either work really, really well or really, really poorly. Simulating people is a way of in fact getting substantial information out of them, and even a relatively poor simulation of someone would be able to tell if someone is, say, homosexual, and that is in fact a very severe privacy violation. I think this should be thought of in the same way as we think about mind crime; it's actually the same thing happening, it's just a matter of degree. I have put the last
two sections together: the status of existing AI systems, and recommendations regarding current practices and AI systems. First, are current AI systems conscious or not? This is something that we covered to some extent in the previous session, so I think the structure of this part could have been substantially better. Bostrom has an argument at some length for why we can't really be sure that current AIs don't have a moral status, and based on the arguments here and LaMDA, I've thought about this and I think I have updated substantially: I do in fact believe that there is a significant probability that the current generation of large language models are conscious to a degree that matters morally. What I care more about is: what are the consequences for AI safety? Given that I became convinced that these models have moral worth, other people may come to the same conclusion, and so in the medium term I think a number of people are going to argue for some kind of machine rights. This will probably have some kind of influence on AI safety, depending of course on how strong this becomes. AI self-determination seems bad at a first glance; AI having a right to privacy also seems probably bad from an interpretability point of view. We could see a slower capability increase if people become worried that they are committing some kind of moral catastrophe when they are building these kinds of AIs. In total, the sum of all this, in particular because it's going to muddy the water, will I suspect result in a negative effect on AI safety. What are the recommendations? Nick Bostrom argues that we should take action now, or soon: to at least to some extent be nice to current systems, similar to how we do for animals; try to figure out current AI sentience; make some kind of early pilot project. Preserving AIs for the future is an important consideration, because that would allow us to make some kind of reparations, in a way we also sometimes do with humans if we put them in jail wrongfully. Try to identify strong suffering and avoid it, get some kind of organizational backing for this, and eventually get government regulation. I think all of this is good and worthwhile and laudable, and I don't think we should do it, because the opportunity cost is actually rather substantial: the same people who are working on this should rather be working on AI safety, and this is going to detract on several levels. Researchers should work on alignment research rather than looking into AI sentience; activists should try slowing capability research rather than working for AI rights; goodwill among AI labs is certainly a very finite resource, and that's something that should be conserved and not spent on something like this. One obvious thing is that I think Bostrom should personally work on the alignment problem rather than working on this. That's of course tongue-in-cheek, right, he can decide what he wants to do, but the point is that this is just way less important in my mind, there is a real trade-off, and I expect following these recommendations would substantially detract from AI safety. There is one very cute thing from this: the idea to have the rewards in deployment be higher than expected from training. I think that was a really fun and interesting idea that I have never seen before, but I would expect this would
have some consequences for alignment, and even though it sounds like a really good idea, I think we would rather have the AI be more predictable and more interpretable; so unfortunately, even for that simple win that Bostrom is suggesting, I think we should focus on solving the alignment problem. Impact paths and moral advocacy: Bostrom suggests we should start now, because even if we start now, actual regulation with teeth will not happen anytime soon, and that's of course something that I agree with. We might see some leading AI actors perhaps doing something; it's reasonable to expect that DeepMind or OpenAI might write a paper or something like this. Getting some real activation energy, to use a term from physics, is unlikely to happen unless we get a dramatic breakthrough, but then we could in fact get some activation and some political will to do something. And when people start to realize, okay, the AIs are in fact suffering, then they'll look around for existing work on how to mitigate the suffering, and they will look into things like this paper, and that's probably a lot better than just having nothing there and having the politicians come up with suggestions. In particular, even if there is activation energy, it is likely to be short-lived compared to how long it takes to create a research field. We could also see a leading actor in AI development becoming very powerful compared to regulation, and that's another advantage to starting early with regulation. I have a hot take, perhaps not very charitable, and that is: if we are not solving the alignment problem really well, we will get lock-in of values, and if we get values locked in, then it really matters to get good values as soon as possible. So that's a very negative take on this kind of regulation, and I think the argument as such makes sense, in that I expect that we'll get lock-in, or if not that then extinction, soon; so it matters a lot to get good values, but it would matter a lot more to not get lock-in. Bostrom is arguing that there might be an AI safety advantage to doing this, and I think there is some kind of AI safety advantage possible from this, but there is another research field that is far more robustly likely to improve AI safety, and that is working on AI safety directly, rather than indirectly by going through this kind of regulation. Multi-level action is probably necessary, in the sense that if we have the most ethical actors trying hard to avoid AI suffering, they become uncompetitive, and then the actors who don't care about AI suffering will just take over. That's precisely the same dynamic we are seeing with AI safety, this racing-to-the-precipice dynamic, and it's a difficult problem. In some sense it's the same resource being consumed, because the AI development actors like DeepMind and OpenAI that are the most ethical are probably also the most safety conscious, so it'll be the same actors that are slowing down for both of these reasons. Government regulation is probably premature in Bostrom's view, and we need to avoid antagonizing the developers. Is public engagement desirable? Maybe. We should certainly make it philosophical and interestingly thought-provoking rather than very confrontational or hype-ish. I think that's the correct thing to do, but I also think it's very unrealistic, in the sense that once we start grabbing headlines
a lot of people will crawl out of the woodwork to try to generate this kind of hype, and I don't think it's possible for philosophers and thoughtful people to keep the debate on those terms. Whether Bostrom actually agrees with this is unclear, but he certainly agrees that this is something that should be considered really carefully; there could easily be a lot of unintended consequences of trying to start some kind of public engagement. That is all for today. Thank you, and see you next time.
e952420e-e9ec-4d5e-9e0c-86f6b13fadbd
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Approval-directed agents *Note: This is the first post from **part two: basic intuitions** of [the sequence on iterated amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd). The second part of the sequence outlines the basic intuitions that motivate iterated amplification. I think that these intuitions may be more important than the scheme itself, but they are considerably more informal.* --- Research in AI is steadily progressing towards more flexible, powerful, and autonomous goal-directed behavior. This progress is likely to have significant economic and humanitarian benefits: it helps make automation faster, cheaper, and more effective, and it allows us to automate *deciding what to do*. Many researchers expect goal-directed machines to predominate, and so have considered the long-term implications of this kind of automation. Some of these implications are worrying: if sophisticated artificial agents pursue their own objectives and are as smart as we are, then the future may be shaped as much by their goals as by ours. Most thinking about “AI safety” has focused on the possibility of goal-directed machines, and asked how we might ensure that their goals are agreeable to humans. But there are other possibilities. In this post I will flesh out one alternative to goal-directed behavior. I think this idea is particularly important from the perspective of AI safety. **Approval-directed agents** ============================ Consider a human Hugh, and an agent Arthur who uses the following procedure to choose each action: Estimate the expected rating Hugh would give each action if he considered it at length. Take the action with the highest expected rating. I’ll call this “approval-directed” behavior throughout this post, in contrast with goal-directed behavior. In this context I’ll call Hugh an “overseer.” Arthur’s actions are rated more highly than those produced by any alternative procedure. That’s comforting, but it doesn’t mean that Arthur is optimal. An optimal agent may make decisions that have *consequences* Hugh would approve of, even if Hugh can’t anticipate those consequences himself. For example, if Arthur is playing chess he should make moves that are actually good—not moves that Hugh thinks are good. The quality of approval-directed decisions is limited by the *minimum* of Arthur’s ability and Hugh’s ability: Arthur makes a decision only if it looks good to both Arthur and Hugh. So why would Hugh be interested in this proposal, rather than doing things himself? * Hugh doesn’t actually rate actions, he just participates in a hypothetical rating process. So Hugh can oversee many agents like Arthur at once (and spend his actual time relaxing on the beach). In many cases, this is the whole point of automation. * Hugh can (hypothetically) think for a very long time about each decision—longer than would be practical or cost-effective if he had to actually make the decision himself. * Similarly, Hugh can think about Arthur’s decisions at a very low level of detail. For example, Hugh might rate a chess-playing AI’s choices about how to explore the game tree, rather than rating its final choice of moves. If Arthur is making billions of small decisions each second, then Hugh can think in depth about each of them, and the resulting system can be much smarter than Hugh. * Hugh can (hypothetically) use additional resources in order to make his rating: powerful computers, the benefit of hindsight, many assistants, very long time periods. 
* Hugh’s capabilities can be gradually escalated as needed, and one approval-directed system can be used to bootstrap to a more effective successor. For example, Arthur could advise Hugh on how to define a better overseer; Arthur could offer advice in real-time to help Hugh be a better overseer; or Arthur could directly act as an overseer for his more powerful successor. In most situations, I would expect approval-directed behavior to capture the benefits of goal-directed behavior, while being easier to define and more robust to errors. Advantages ========== Facilitate indirect normativity ------------------------------- Approval-direction is closely related to what Nick Bostrom calls “indirect normativity” — describing what is good indirectly, by describing how to tell what is good. I think this idea encompasses the most credible proposals for defining a powerful agent’s goals, but has some practical difficulties. Asking an overseer to evaluate *outcomes* directly requires defining an [**extremely** intelligent overseer](http://ordinaryideas.wordpress.com/2014/08/27/specifying-enlightened-judgment-precisely-reprise/), one who is equipped (at least in principle) to evaluate the entire future of the universe. This is probably impractical overkill for the kinds of agents we will be building in the near future, who *don’t* have to think about the entire future of the universe. Approval-directed behavior provides a more realistic alternative: start with simple approval-directed agents and simple overseers, and scale up the overseer and the agent in parallel. I expect the approval-directed dynamic to converge to the desired limit; this requires only that the simple overseers approve of scaling up to more powerful overseers, and that they are able to recognize appropriate improvements. Avoid lock-in ------------- Some approaches to AI require “locking in” design decisions. For example, if we build a goal-directed AI with the wrong goals then the AI might never correct the mistake on its own. For sufficiently sophisticated AI’s, such mistakes may be very expensive to fix. There are also more subtle forms of lock-in: an AI may also not be able to fix a bad choice of decision-theory, sufficiently bad priors, or a bad attitude towards infinity. It’s hard to know what other properties we might inadvertently lock-in. Approval-direction involves only extremely minimal commitments. If an approval-directed AI encounters an unforeseen situation, it will respond in the way that we most approve of. We don’t need to make a decision until the situation actually arises. Perhaps most importantly, an approval-directed agent can correct flaws in its own design, and will search for flaws if we want it to. It can change its own decision-making procedure, its own reasoning process, and its own overseer. Fail gracefully --------------- Approval-direction seems to “fail gracefully:” if we slightly mess up the specification, the approval-directed agent probably won’t be actively malicious. For example, suppose that Hugh was feeling extremely apathetic and so evaluated proposed actions only superficially. The resulting agent would not aggressively pursue a flawed realization of Hugh’s values; it would just behave lackadaisically. The mistake would be quickly noticed, unless Hugh deliberately approved of actions that concealed the mistake. This looks like an improvement over misspecifying goals, which leads to systems that are actively opposed to their users. 
Such systems are motivated to conceal possible problems and to behave maliciously. The same principle sometimes applies if you define the right overseer but the agent reasons incorrectly about it, if you misspecify the entire rating process, or if your system doesn’t work quite like you expect. Any of these mistakes could be serious for a goal-directed agent, but are probably handled gracefully by an approval-directed agent. Similarly, if Arthur is smarter than Hugh expects, the only problem is that Arthur won’t be able to use all of his intelligence to devise excellent plans. This is a serious problem, but it can be fixed by trial and error—rather than leading to surprising failure modes. Is it plausible? ================ I’ve already mentioned the practical demand for goal-directed behavior and why I think that approval-directed behavior satisfies that demand. There are other reasons to think that agents might be goal-directed. These are all variations on the same theme, so I apologize if my responses become repetitive. Internal decision-making ------------------------ We assumed that Arthur can predict what actions Hugh will rate highly. But in order to make these predictions, Arthur might use goal-directed behavior. For example, Arthur might perform a calculation because he believes it will help him predict what actions Hugh will rate highly. Our apparently approval-directed decision-maker may have goals after all, on the inside. Can we avoid this? I think so: Arthur’s internal decisions could also be approval-directed. Rather than performing a calculation because it will help make a good prediction, Arthur can perform that calculation because Hugh would rate this decision highly. If Hugh is coherent, then taking individual steps that Hugh rates highly leads to overall behavior that Hugh would approve of, just like taking individual steps that maximize X leads to behavior that maximizes X. In fact the result may be more desirable, from Hugh’s perspective, than maximizing Hugh’s approval. For example, Hugh might incorrectly rate some actions highly, because he doesn’t understand them. An agent maximizing Hugh’s approval might find those actions and take them. But if the agent was internally approval-directed, then it wouldn’t try to exploit errors in Hugh’s ratings. Actions that lead to reported approval but not real approval, don’t lead to approval for approved reasons **Turtles all the way down?** Approval-direction stops making sense for low-level decisions. A program moves data from register A into register B because that’s what the next instruction says, not because that’s what Hugh would approve of. After all, deciding whether Hugh would approve itself requires moving data from one register to another, and we would be left with an infinite regress. The same thing is true for goal-directed behavior. Low-level actions are taken because the programmer chose them. The programmer may have chosen them because she thought they would help the system achieve its goal, but the actions themselves are performed because that’s what’s in the code, not because of an explicit belief that they will lead to the goal. Similarly, actions might be performed because a simple heuristic suggests they will contribute to the goal — the heuristic was chosen or learned because it was expected to be useful for the goal, but the action is motivated by the heuristic. Taking the action doesn’t involve thinking about the heuristic, just following it. 
Similarly, an approval-directed agent might perform an action because it's the next instruction in the program, or because it's recommended by a simple heuristic. The program or heuristic might have been chosen to result in approved actions, but taking the action doesn't involve reasoning about approval. The aggregate effect of using and refining such heuristics is to effectively do what the user approves of. In many cases, perhaps a majority, the heuristics for goal-directed and approval-directed behavior will coincide. To answer "what do I want this function to do next?" I very often ask "what do I want the end result to be?" In these cases the difference is in how we think about the behavior of the overall system, and what invariants we try to maintain as we design it. **Relative difficulty?** Approval-directed subsystems might be harder to build than goal-directed subsystems. For example, there is much more data of the form "X leads to Y" than of the form "the user approves of X." This is a typical AI problem, though, and can be approached using typical techniques. Approval-directed subsystems might also be easier to build, and I think this is the case today. For example, I recently wrote a function to decide which of two methods to use for the next step of an optimization. Right now it uses a simple heuristic with mediocre performance. But I could also have labeled some examples as "use method A" or "use method B," and trained a model to predict what I would say. This model could then be used to decide when to use A, when to use B, and when to ask me for more training data. Reflective stability -------------------- Rational goal-directed behavior is reflectively stable: if you want X, you generally want to continue wanting X. Can approval-directed behavior have the same property? Approval-directed systems inherit reflective stability (or instability) from their overseers. Hugh can determine whether Arthur "wants" to remain approval-directed, by approving or disapproving of actions that would change Arthur's decision-making process. Goal-directed agents want to be wiser and know more, though their goals are stable. Approval-directed agents also want to be wiser and know more, but they also want their overseers to be wiser and know more. The overseer is not stable, but the overseer's values are. This is a feature, not a bug. Similarly, an agent composed of approval-directed subsystems overseen by Hugh is *not* the same as an approval-directed agent overseen by Hugh. For example, the composite may make decisions too subtle for Hugh to understand. Again, this is a feature, not a bug. Black box search ---------------- (Note: I no longer agree with the conclusions of this section. I now feel that approval-directed agents can probably be constructed out of powerful black-box search (or stochastic gradient descent); my main priority is now either handling this setting or else understanding exactly what the obstruction is. Ongoing work in this direction is collected at [ai-control](https://medium.com/ai-control), and will hopefully be published in a clear format by the end of 2016.) Some approaches to AI probably can't yield approval-directed agents. For example, we could perform a search which treats possible agents as black boxes and measures their behavior for signs of intelligence. Such a search could (eventually) find a human-level intelligence, but would give us very crude control over how that intelligence was applied.
We could get some kind of goal-directed behavior by selecting for it, but selecting for approval-directed behavior would be difficult: 1. The paucity of data on approval is a huge problem in this setting. (Note: [semi-supervised reinforcement learning](https://medium.com/ai-control/semi-supervised-reinforcement-learning-cf7d5375197f#.27jxfbfni) is an approach to this problem.) 2. You have no control over the internal behavior of the agent, which you would expect to be optimized for pursuing a particular goal: maximizing whatever measure of “approval” that you used to guide your search. (Note: I no longer endorse this argument as written; [reward engineering](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450) is a response to the substance of this concern.) 3. Agents who maximized your reported approval in test cases need not do so in general, any more than humans are reliable reproductive-fitness-maximizers. (Note: [red teaming](https://medium.com/ai-control/red-teams-b5b6de33dc76#.zeiezwnmg) is an approach to this problem.) But [1] and especially [3] are also problems when designing a goal-directed agent with **agreeable** goals, or indeed any particular goals at all. Though approval-direction can’t deal with these problems, they aren’t new problems. Such a black-box search—with little insight into the internal structure of the agents—seems worrying no matter how we approach AI safety. Fortunately, it also seems unlikely (though not out of the question). A similar search is more likely to be used to produce internal components of a larger system (for example, you might train a neural network to identify objects, as a component of a system for navigating an unknown environment). This presents similar challenges, concerning robustness and unintended behaviors, whether we are designing a goal-directed or approval-directed agent. **“Implementation” details** ============================ So far I’ve talked about approval-direction imprecisely. Maybe I’m talking about something incoherent, which has desirable properties only in the same sense as a four-sided triangle—vacuously. I won’t really be able to dispel this concern here, but I’ll at least take some steps. How do you define approval? =========================== Eventually you would have to actually write code implementing approval-directed behavior. What might that code look like? I want to set aside the problem “what does a sophisticated AI look like?” since I obviously don’t know. So let’s suppose we had some black box that did all of the hard work. I’ll consider a few cases for what the black box does, ranging from “easy to work with” to “very hard to work with.” (Note: I now believe that we can target AI systems trained (nearly) end-to-end with gradient descent, which is most similar to “learning from examples.”) **Natural language** As an easy case, suppose we have a natural language question-answering system, which can assign a probability to any natural language sentence. In this case, we ask the question: > “Suppose that Hugh understood the current situation, was asked `on a scale from 0 to 1, how good is the action a*?*’ and was given a few hours to determine his answer. What would his answer be?” We then loop over each action *a* and take the action with the highest expected answer. In this framework, it is easy to replace Hugh by a more powerful overseer—all you have to do is specify the replacement in natural language. 
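To make that loop concrete, here is a minimal sketch, in Python, of the natural-language variant described above. The `qa_model.probability` call, the prompt template, and the threshold-averaging trick for estimating the expected rating are hypothetical stand-ins for illustration, not anything specified in the post.

```python
# Hedged sketch of an approval-directed action loop on top of a
# natural-language question-answering model. `qa_model` is assumed to
# expose a probability(statement) -> float method; this API is invented
# here for illustration only.

APPROVAL_TEMPLATE = (
    "Suppose that Hugh understood the current situation, was asked "
    "'on a scale from 0 to 1, how good is the action {action}?' and was "
    "given a few hours to determine his answer. His answer would be at least {t}."
)

def expected_approval(qa_model, action, thresholds=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Approximate E[rating] using E[X] = integral of P(X >= t) dt over [0, 1],
    estimated on a coarse grid of thresholds."""
    probs = [
        qa_model.probability(APPROVAL_TEMPLATE.format(action=action, t=t))
        for t in thresholds
    ]
    return sum(probs) / len(probs)

def choose_action(qa_model, candidate_actions):
    """Loop over each action and take the one with the highest expected rating."""
    return max(candidate_actions, key=lambda a: expected_approval(qa_model, a))
```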
**"Math intuition module"** At an opposite extreme, suppose we have a "math intuition module," a system which can assign probabilities only to *perfectly* precise statements—perhaps of the form "algorithm A returns output *y* on input *x*." I've written about defining "approval upon reflection" algorithmically (see [here](http://ordinaryideas.wordpress.com/2014/08/24/specifying-a-human-precisely-reprise/), [here](http://ordinaryideas.wordpress.com/2014/08/27/specifying-enlightened-judgment-precisely-reprise/)). These definitions can be used to define approval-directed behavior completely precisely. I'm pretty hesitant about these definitions, but I do think it is promising that we can get traction even in such an extreme case. In reality, I expect the situation to be somewhere in between the simple case of natural language and the hard case of mathematical rigor. Natural language is the case where we share all of our concepts with our machines, while mathematics is the case where we share only the most primitive concepts. In reality, I expect we will share some but not all of our concepts, with varying degrees of robustness. To the extent that approval-directed decisions are robust to imprecision, we can safely use some more complicated concepts, rather than trying to define what we care about in terms of logical primitives. **Learning from examples** In an even harder case, suppose we have a function learner which can take some labelled examples *f(x) = y* and then predict a new value *f(x')*. In this case we have to define "Hugh's approval" directly via examples. I feel less comfortable with this case, but I'll take a shot anyway. In this case, our approval-directed agent Arthur maintains a probabilistic model over sequences **observation**[*T*] and **approval**[*T*](*a*). At each step *T*, Arthur selects the action *a* maximizing **approval**[*T*](*a*). Then the timer *T* is incremented, and Arthur records **observation**[*T*+1] from his sensors. Optionally, Hugh might specify a value **approval**[*t*](*a'*) for any time *t* and any action *a'*. Then Arthur updates his models, and the process continues. Like AIXI, if Arthur is clever enough he eventually learns that **approval**[*T*](*a*) refers to whatever Hugh will retroactively input. But unlike AIXI, Arthur will make no effort to manipulate these judgments. Instead he takes the action maximizing his expectation of **approval**[*T*] — i.e., his prediction about what Hugh will say in the future, if Hugh says anything at all. (This depends on his self-predictions, since what Hugh does in the future depends on what Arthur does now.) At any rate, this is quite a lot better than AIXI, and it might turn out fine if you exercise appropriate caution. I wouldn't want to use it in a high-stakes situation, but I think that it is a promising idea and that there are many natural directions for improvement. For example, we could provide further facts about **approval** (beyond example values), interpolating continuously between learning from examples and using an explicit definition of the approval function. More ambitiously, we could implement "approval-directed learning," preventing it from learning complicated undesired concepts. How should Hugh rate? --------------------- So far I've been very vague about what Hugh should actually do when rating an action. But the approval-directed behavior depends on how Hugh decides to administer approval. How should Hugh decide?
If Hugh expects action *a* to yield better consequences than action *b*, then he should give action *a* a higher rating than action *b.* In simple environments he can simply pick the best action, give it a rating of 1, and give the other options a rating of 0. If Arthur is so much smarter than Hugh that he knows exactly what Hugh will say, then we might as well stop here. In this case, approval-direction amounts to Arthur doing exactly what Hugh instructs: "the minimum of Arthur's capabilities and Hugh's capabilities" is equal to "Hugh's capabilities." But most of the time, Arthur won't be able to tell exactly what Hugh will say. The numerical scale between 0 and 1 exists to accommodate Arthur's uncertainty. To illustrate the possible problems, suppose that Arthur is considering whether to drive across a bridge that may or may not collapse. Arthur thinks the bridge will collapse with 1% probability. But Arthur also thinks that Hugh knows for sure whether or not the bridge will collapse. If Hugh always assigned the optimal action a rating of 1 and every other action a rating of 0, then Arthur would take the action that was most likely to be optimal — driving across the bridge. Hugh should have done one of two things: * Give a bad rating for risky behavior. Hugh should give Arthur a high rating only if he drives across the bridge and *knows that it is safe*. In general, give a rating of 1 to the best action *ex ante*. * Assign a very bad rating to incorrectly driving across the bridge, and only a small penalty for being too cautious. In general, give ratings that reflect the utilities of possible outcomes—to the extent you know them. Probably Hugh should do both. This is easier if Hugh understands what Arthur is thinking and why, and what range of possibilities Arthur is considering. Other details ------------- I am leaving out many other important details in the interest of brevity. For example: * In order to make these evaluations Hugh might want to understand what Arthur is thinking and why. This might be accomplished by giving Hugh enough time and resources to understand Arthur's thoughts; or by letting different instances of Hugh "communicate" to keep track of what is going on as Arthur's thoughts evolve; or by ensuring that Arthur's thoughts remain comprehensible to Hugh (perhaps by using approval-directed behavior at a lower level, and only approving of internal changes that can be rendered comprehensible). * It is best if Hugh optimizes his ratings to ensure the system remains robust. For example, in high stakes settings, Hugh should sometimes make Arthur consult the real Hugh to decide how to proceed—even if Arthur correctly knows what Hugh wants. This ensures that Arthur will seek guidance when he *incorrectly* believes that he knows what Hugh wants. …and so on. The details I *have* included should be considered illustrative at best. (I don't want anyone to come away with a false sense of precision.) Problems ======== It would be sloppy to end the post without a sampling of possible pitfalls. For the most part these problems have more severe analogs for goal-directed agents, but it's still wise to keep them in mind when thinking about approval-directed agents in the context of AI safety. My biggest concerns ------------------- I have three big concerns with approval-directed agents, which are my priorities for follow-up research: * Is an approval-directed agent generally as useful as a goal-directed agent, or does this require the overseer to be (extremely) powerful?
Based on the ideas in [this post](https://medium.com/@paulfchristiano/approval-directed-bootstrapping-5d49e886c14f), I am cautiously optimistic. * Can we actually define approval-directed agents by examples, or do they already need a shared vocabulary with their programmers? I am again cautiously optimistic. * Is it realistic to build an intelligent approval-directed agent without introducing goal-directed behavior internally? I think this is probably the most important follow-up question. I would guess that the answer will be “it depends on how AI plays out,” but we can at least get insight by addressing the question in a variety of concrete scenarios. Motivational changes for the overseer ------------------------------------- “What would I say if I thought for a *very* long time?” might have a surprising answer. The very process of thinking harder, or of finding myself in a thought experiment, might alter my priorities. I may care less about the real world, or may become convinced that I am living in a simulation. This is a particularly severe problem for my [proposed implementation](http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) of indirect normativity, which involves a truly outlandish process of reflection. It’s still a possible problem for defining approval-direction, but I think it is much less severe. “What I would say after a few hours,” is close enough to real life that I wouldn’t expect my thought process to diverge too far from reality, either in values or beliefs. Short time periods are much easier to predict, and give less time to explore completely unanticipated lines of thought. In practice, I suspect we can also define something like “what I would say after a few hours of sitting at my desk under completely normal conditions,” which looks particularly innocuous. Over time we will build more powerful AI’s with more powerful (and perhaps more exotic) overseers, but making these changes gradually is much easier than making them all at once: small changes are more predictable, and each successive change can be made with the help of increasingly powerful assistants. Treacherous turn ---------------- If Hugh inadvertently specifies the wrong overseer, then the resulting agent might be motivated to deceive him. *Any* rational overseer will be motivated to approve of actions that look reasonable to Hugh. If they don’t, Hugh will notice the problem and fix the bug, and the original overseer will lose their influence over the world. This doesn’t seem like a big deal—a failed attempt to specify “Hugh” probably won’t inadvertently specify a different Hugh-level intelligence, it will probably fail innocuously. There are some possible exceptions, which mostly seem quite obscure but may be worth having in mind. The learning-from-examples protocol seems particularly likely to have problems. For example: * Someone other than Hugh might be able to enter training data for **approval**[*T*](*a).* Depending on how Arthur is defined, these examples might influence Arthur’s behavior as soon as Arthur expects them to appear. In the most pathological case, these changes in Arthur’s behavior might have been the very reason that someone had the opportunity to enter fraudulent training data. * Arthur could accept the [motivated simulation argument](http://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/), believing himself to be in a simulation at the whim of a simulator attempting to manipulate his behavior. 
* The simplest explanation for Hugh’s judgments may be a simple program motivated to “mimic” the series **approval**[*T*] and **observation**[*T*] in order to influence Arthur. Ignorance --------- An approval-directed agent may not be able to figure out what I approve of. I’m skeptical that this is a serious problem. It falls under the range of predictive problems I’d expect a sophisticated AI to be good at. So it’s a standard objective for AI research, and AI’s that can’t make such predictions probably have significantly sub-human ability to act in the world. Moreover, even a fairly weak reasoner can learn generalizations like “actions that lead to Hugh getting candy, tend to be approved of” or “actions that take control away from Hugh, tend to be disapproved of.” If there is a problem, it doesn’t seem like a serious one. Straightforward misunderstandings will lead to an agent that is inert rather than actively malicious (see the “Fail gracefully” section). And deep misunderstandings can be avoided, by Hugh approving of the decision “consult Hugh.” Conclusion ========== Making decisions by asking “what **action** would your owner most approve of?” may be more robust than asking “what **outcome** would your owner most approve of?” Choosing actions directly has limitations, but these might be overcome by a careful implementation. More generally, the focus on achieving safe goal-directed behavior may have partially obscured the larger purpose of the AI safety community, which should be achieving safe and *useful* behavior. It may turn out that goal-directed behavior really is inevitable or irreplaceable, but the case has not yet been settled. --- *This essay was originally posted [here](https://ai-alignment.com/model-free-decisions-6e6609f5d99e) on 1st December 2014.* *Tomorrow's AI Alignment Forum sequences post will be 'Fixed Point Discussion' by Scott Garrabrant, in the sequence 'Fixed Points'.* *The next posts in this sequence will be 'Approval directed bootstrapping' and 'Humans consulting HCH', two short posts which will come out on Sunday 25th November.*
60200bb4-eb9a-4b8d-ac56-69fee4397b32
trentmkelly/LessWrong-43k
LessWrong
How common are abiogenesis events? As far as I'm aware, only one abiogenesis event has happened on earth- a place with copious amounts of the exact right molecules in cosmically unique quantity, density, and environment to form self-replicating RNA. If abiogenesis has happened anywhere else, it hasn't evolved into intelligent life whose work has intercepted our light cone. My current model for how abiogenesis went down in our world begins with the semi-random coalescence of more than a thousand nucleotides- for reference, the smallest known self-replicating bacterium has a roughly 580,000 base-pair genome.  Assuming each additional required nucleotide introduces a 1/2 chance of failure, each random coalescence event has a less than (1/2)^1000 ~= 10^-300 chance of forming a viable self-replicator. This estimate is completely off the scale with respect to events that you can expect to occur in a 100 billion l.y. radius sphere. This is my current idea, at least. Do you know of a more persuasive argument? EDIT: For future readers, I now consider the Grabby Aliens paper to present a compelling alternate model.
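A quick check of the arithmetic in this estimate; the thousand-nucleotide length and the 1/2-per-nucleotide failure chance are the post's own rough assumptions, so this only verifies the multiplication, not the model.

```python
import math

# Toy model from the post: ~1000 required nucleotides, each introducing
# an independent 1/2 chance of failure per coalescence event.
p = 0.5 ** 1000
print(p)                      # ~9.33e-302
print(1000 * math.log10(2))   # ~301.03, so p is roughly 10^-301
```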
0715490c-5aa1-437a-9d9a-3a7b8d3702fc
trentmkelly/LessWrong-43k
LessWrong
Against the Backward Approach to Goal-Directedness Introduction: Forward and Backward Approaches I first started thinking about deconfusing goal-directedness after reading Rohin's series of four posts on the subject. My goal was to make sense of his arguments related to goal-directedness, and to understand whether alternatives where possible and/or viable. I thus thought of this research as quite naturally following two complementary approach: * A forward approach, starting from the intuitions about goal-directedness and trying to find a satisfactory formalization from a philosophical standpoint. * A backward approach, starting from the arguments on AI risk using goal-directedness (like Rohin's), and trying to find what about goal-directedness made these arguments work. In the end, both approaches would meet in the middle and inform each other, hopefully settling whether the cluster of concepts around goal-directedness was actually relevant for the arguments using the latter. The thing is, I became less and less excited about the backward approach over time, to the point that I don't work on it anymore. I sincerely feel like most of the value will come from nailing the forward approach (with additional constraints mentioned below). Yet I never wrote anything about my reason for this shift, if only because I never made explicit this approach, except with my collaborators Michele Campolo and Joe Collman. Since Daniel Kokotajlo pushed for what is essentially the backward approach in a comment in our Literature Review on Goal-Directedness, I believe this is the perfect time to do so. Trying the Backward Approach How do we start to investigate goal-directedness through the backward approach? Through the arguments about AI risks relying on goal-directedness. Let's look at the arguments for convergent instrumental subgoals from Omohundro’s The Basic AI Drives, which require goal-directedness as mentioned by Rohin's Coherence arguments do not imply goal-directed behavior. This becomes clear through the definition of
31707dd4-3acc-4739-8612-2ddb0121d767
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Exploring Mild Behaviour in Embedded Agents *Thanks to Tristan Cook, Emery Cooper, Nicolas Macé and Will Payne for discussions, advice, and comments. All views are my own.* ***I think this post is best viewed as an effort to point at a potentially interesting research direction. The specific arguments I make centered around inducing mild behaviour in embedded agents are rough and exploratory.*** Summary ======= * If an embedded agent's utility function is heavily dependent on the parts of the environment which constitute its own thinking processes, we might see behaviour that does not have an equivalent in unembedded agents. + *(This seems pretty likely to me. Of all the claims here I am most confident of this one.)* * We might be able to design measures/utility functions that constrain the parts of the environment that constitute the agent’s thinking process itself, such that we can moderate the strength of the optimisation that an embedded agent exerts on the rest of the environment. + *(This seems in theory possible to me but I am currently very uncertain about whether this is feasible in practice.)* * Following this, we may be able to induce mild optimisation in EU maximising embedded agents through the specification of the agent’s utility function, and this induced ~mild optimisation could help protect that very same mildness-inducing specification from being gamed. + *(Similar thoughts to above)* * Measures that impose optimisation pressure on the thinking process might be more robust than usual to specification problems if they can induce some degree of mild optimisation even when they are not specified in a water-tight manner. To the extent that we can manage to specify a mediocre efficiency measure that induces even some small degree of mild optimisation, it will also slightly reduce the optimisation pressure the agent puts on the specification as a whole. In this way, there might be some potential for measures like these to be made somewhat self-protecting against specification problems like [edge instantiation](https://arbital.com/p/edge_instantiation/) and [nearest unblocked strategies](https://arbital.com/p/nearest_unblocked/). + *(Again, similar thoughts to above, though I do currently find this general idea promising.)* Context ======= A fundamental property of embedded agents is that the agent’s thinking process is a part of the environment itself (for example, some process running on a physical chip). This can be a driver of certain ‘bad’ behaviours like wireheading, unwanted self-preservation, or extreme resource acquisition for the sake of self-improvement. However, perhaps we can also use the fact that embedded agents have physically instantiated ‘thinking processes’ to try and induce safer behaviour.  Two major things that make alignment so difficult are: 1. Avoiding small degrees of misspecification of ‘what we actually want’ is very difficult 2. The specification is likely to be put under an enormous amount of optimisation pressure by a powerful optimiser (which is likely to ‘find’ and ‘blow up the consequences’ of any small misspecification) These considerations motivate work on approaches such as designing [low-impact measures](https://arbital.com/p/low_impact/) or methods of implementing [mild optimisation](https://arbital.com/p/soft_optimizer/). Low-impact measures mostly address the first point, by aiming to heuristically specify part of ‘what we actually want’ whilst also reducing the consequences of misspecifications. 
Mild optimisation mostly addresses the second point, by aiming to ‘tone down’ the optimisation pressure.   Embedded agents have physical instantiations of the optimisation processes that they use to decide how to optimise their environment (e.g. some algorithm running on a physical chip). What would happen if we attempted to construct a utility function that strongly incentivises the agent to constrain the physical instantiation of its own optimising process? Might it be possible to steer some of that enormous optimisation power back onto the optimiser itself? In this post, I will be focusing on some possible effects of implementing an incentive for embedded agents to ‘think efficiently’. I will look at the possibility that ‘efficient thinking’ measures can cause EU maximisers to implement mild optimisation and the possibility of these measures being somewhat self-protecting against specification gaming.    The Core Idea ============= The core idea is that we might be able to induce mild optimisation in EU maximising embedded agents through the specification of the agent’s utility function, and that this induced mild optimisation could help protect that very same specification from being gamed. A summary of the line of argument: 1. Mild optimisation makes problems associated with misspecification less intense. 2. For embedded agents, it is in principle possible to ‘reference’ the internal workings of the agent in its own utility function, because these internal workings are part of the environment. 3. It seems like, in principle, thinking penalties can cause / further increase pressure on EU maximising embedded agents to implement mild optimisation when selecting strategies. 4. In practice, we often can’t specify much of anything very well.  It seems likely that for any way of specifying a ‘cost to thinking’ we could come up with, some of the most optimal strategies will include doing things that dodge or game a thinking penalty. 5. However, to the extent that we can even poorly specify ‘cost to thinking’ measures that initially induce some degree of mild optimisation, we also reduce the chance of these very optimal ‘penalty dodging’ strategies being played. In other words, to the extent that we can even poorly specify measures that induce mild optimisation, we are also inducing the very means to provide these measures with some degree of protection from being gamed. ![](https://lh4.googleusercontent.com/xeIPEvEbOPy-9bCSJcxcsAC8UU17PNM06Bl8xglSnf_oimynhEVoCfnyqJqIjNue3kDR-_r1J2v4UTxoXlEV3a70t7yl7SXlsxvaCKzCnNUy2FN-k-ZGmGTlYV9ViZgj5A1NEv0j8cxDzuRQEw)   Efficient Thinking Measures =========================== Why expect efficient thinking to induce mildness anyway? -------------------------------------------------------- In an idealised form, I intend for an ‘efficient thinking’ measure to be used to impose a ‘cost to thinking’ on an agent or equivalently, be used to incentivise an agent to ‘think less’.  Since evaluating and searching through potential strategies involves some kind of cognitive processing, penalties on ‘‘thinking’ impose additional costs on searching through and evaluating strategies in and of itself. Therefore, gains from further deliberation over strategies will need to be traded off against the costs of further deliberation, which seems like it ought to encourage a milder optimisation over strategies. For example, consider an agent with a goal to accomplish some task whilst making efficient use of some bounded amount of compute. 
This agent is optimising not only for moving the world towards some target state but also for using less compute to do so. Crucially, the search for the best way to do this **itself**costs compute. Searching over possible policies/strategies/actions has a cost, and so the ‘search power’ the agent employs is itself traded off against the search’s resource consumption. An agent could be an EU maximiser (in the sense of maximising its EU wrt its beliefs) but still **implement**mild optimisation over its strategy search. This is because the agent’s internal workings are a part of the environment and so themselves are also constrained by the computational efficiency goal.This is why these points are only applicable to embedded agents. On a side note, this ‘cost to thinking’ seems to bear a pretty strong similarity to the situation that biological organisms find themselves in. Being smarter, in the sense of being better at navigating, predicting, and manipulating one’s environment, seems like it would always be good for fitness if it were free. However, cognition has a cost,[[1]](#fnwj3fl6ufab9) so the benefits of any increase in cognitive effort ought to be worth the higher cost that thinking harder or better would entail. The consequences of this tradeoff can be seen in how humans and animals allocate cognitive effort. For example, people satisfice over choice sets,[[2]](#fndvt5uhcfvzu) and in forced-choice experiments people trade-off cognitive effort against pain.[[3]](#fnbreew056tk9) (For those interested in this, I found [this review](https://www.annualreviews.org/doi/epdf/10.1146/annurev-neuro-072116-031526) good. It also felt surprisingly relevant to / geared towards AI design). What does ‘efficient thinking’ look like in practice? ----------------------------------------------------- In practice, an attempt to properly specify something as abstract as an ‘efficient thinking’ measure seems very difficult, and likely to run into the same kinds of problems that low-impact measures tend to rub up against. For example, it seems very difficult to specify a thinking penalty in such a way that an agent would ‘be responsible for’ thinking done using delegation or successor agents, whilst also ‘not being responsible for’ resources used for all future ‘human-thinking’. [[4]](#fnb9nxhwzaygn) When I am personally imagining a basic ‘efficient thinking’ measure, I tend to think in terms of ‘compute used by an agent when deciding between actions’ or sometimes just ‘compute used by the agent’ full stop. One thing I personally like about compute usage as an efficient thinking measure is it feels both quite fundamental[[5]](#fnkj0jwoj9csp) and quite ‘close’ in the sense that we can already implement ~compute use penalties in neural networks. A straightforward implementation of this is [adaptive computation time for RNNs](https://arxiv.org/pdf/1603.08983.pdf) by Alex Graves, where an RNN has an output that determines the probability of continued computation, and computation time is penalised in the loss function. Other similar and more recent examples include [PonderNet](https://arxiv.org/abs/2107.05407) and [PALBERT](https://arxiv.org/pdf/2204.03276.pdf) (though they also include exploration incentives). I also want to mention that I think many measures have some ‘efficient thinking’ component to them, even if just weakly. 
For example, very general measures like ‘resource use’ or ‘energy use’ seem like they would impose some thinking cost, and I suspect that many low-impact measures will also impose a cost to thinking to some degree in a similar fashion. Where the ‘efficient thinking’ component of these measures is the part that puts optimisation pressure on the physical instantiation of the agent’s thinking process itself. Additionally, all embedded agents will likely also have some ‘cost to thinking’ by default (due to opportunity costs for example) even if that cost is very small.  Efficient Thinking, Meta-Reasoning and Mild Optimisation ======================================================== ![](https://lh4.googleusercontent.com/ckPRKZ2dq9MVI5uM8YR2iRzamwaxCuj4LUtMkZ4BFVvJa0WC6IMEOxZaq065VMyPpn-_0Bb9MCVuWVuxL05hITQrnGIokQlyKehN06IXycZMISwbVwU9Nz4Datf94a5hJt2naqmW6MeYL_hE6Q)Imagine a special kind of slot machine such that when some set amount of money is paid into the machine, a random number from 0 to 1 is generated from a uniform distribution, printed onto a ticket, and given to the player. The player can print as many tickets as they please. At any point, the player may ‘cash out’ and convert only their highest-numbered ticket into some proportional payoff. At what point should they cash out to maximise their payoff?  If there is no cost at all to generating another ticket, then the player would receive the highest payoff by continuing to generate tickets until they get one corresponding to the absolute maximum possible payoff (exactly 1). If the cost to generate a ticket was above half the maximum payoff, then it is not even worth it for them to generate a single ticket. For this specific problem, it is better to cash out than keep playing if ubest>1−√2ucost.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: 
(where $u_{best}$ is the payoff from cashing in the highest currently owned ticket and $u_{cost}$ is the cost to generate a single ticket - workings in the appendix).

![](https://lh3.googleusercontent.com/rVrKODoOpPiTVciS-2c5ehGl_BkbgX0MyC3DI1DP0U_izclrbJ_lERWdJyGl4FlIg2RDM3_UvZVlsH6_ddVRQUQLSX-YDZHah8K4b3e_i52KIjcitA7OUv4MaNQK4NQ4guIOU8ev97-MZzGJQw)

In other words, if the current best ticket is worth over a certain amount (which depends on the cost to generate one ticket), the player is better off cashing out than paying to generate more tickets. In essence, the player is best off ‘satisficing’[[6]](#fnavc3a5gwzrm) over the ticket values, with higher costs to generate a ticket resulting in lower optimal satisficing ticket-value thresholds. The player is still just maximising their baseline expected payoff here. It’s just that, as a consequence of the cost of generating tickets, the player is best off implementing essentially a satisficing threshold on acceptable ticket values for ‘cashing out’.

This problem has similarities to the kinds of problems faced by agents which need to spend limited or costly computational resources in order to evaluate potential strategies/actions. These agents need to balance gains from increased deliberation against costs associated with using additional computational resources or time. For example, instead of a player spending money to generate tickets that correspond to some monetary payoff, we could instead have an agent which spends utility/compute to generate potential object-level strategies with corresponding expected utilities. Similarly, in this scenario, this agent would also maximise its total EU (which includes the computational costs of deliberation) via mild optimisation of strategy utilities, with higher computational penalties causing lower ‘satisficing’ thresholds over object-strategy utilities. (For details, see the appendix.)
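To sanity-check the threshold, here is a small Monte Carlo sketch of the slot machine (my own illustration, not from the original workings): the player keeps buying tickets until one exceeds a chosen threshold, then cashes out, and we compare the average net payoff across thresholds.

```python
import random

def avg_payoff(threshold, cost, trials=20000):
    """Average net payoff when buying uniform[0,1] tickets at `cost` each
    until one exceeds `threshold`, then cashing out the best ticket."""
    total = 0.0
    for _ in range(trials):
        best, spent = 0.0, 0.0
        while best < threshold:
            spent += cost
            best = max(best, random.random())
        total += best - spent
    return total / trials

cost = 0.02
analytic_optimum = 1 - (2 * cost) ** 0.5   # = 0.8 for this cost
for t in [0.5, 0.7, analytic_optimum, 0.9, 0.95]:
    print(f"threshold {t:.2f}: avg net payoff {avg_payoff(t, cost):.3f}")
# The threshold 1 - sqrt(2 * cost) should come out (roughly) on top,
# matching the cash-out condition above.
```

Lower ticket costs push the best threshold towards 1 (keep optimising); higher costs push it down (settle for less), which is the satisficing behaviour described above.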
However, more generally, embedded agents will be facing a far more complex situation than in the toy example discussed above. For example, agents may be uncertain about the value of additional deliberation, need to consider the value of information gained from their actions, have complexity/cost trade-offs on their own beliefs, need to avoid an infinite regress of meta-reasoning, etc. Essentially, embedded agents have to engage on some level with meta-reasoning problems (whether through heuristics, simple behaviours acquired in training, or otherwise). By increasing the inherent costs of computation, we might be able to set up the optimal solution to these problems to involve milder, more approximate meta-strategies than they otherwise would.

There exists quite a bit of interesting work on rational behaviour in meta-reasoning problems,[[7]](#fnz1gojrhc8l8) and I think [this paper](https://people.eecs.berkeley.edu/~russell/classes/cs294/f21/papers/Gershman-et-al-Science2015-Computational-Rationality.pdf) in particular gives a nice overview. (Quick footnote dump for a few other nice meta-reasoning papers I looked at whilst writing this: [[8]](#fnya5g51uevb),[[9]](#fn405lwzltej1),[[10]](#fnslrg6obesi),[[11]](#fn8o76ej4dc4).)

Regardless of the specifics, changing the inherent cost of thinking/computation does seem like it often meaningfully changes what the EU maximising meta-reasoning strategy is. Specifically, increasing the costs of deliberation seems to lower the actually optimal (from the perspective of maximising EU) amount of deliberation over the selection of object-level actions/strategies. In a simple toy case (appendix) this results in an optimal meta policy which is equivalent to satisficing over object-level strategy utilities. In general, decreasing the optimal amount of deliberation/precision over object-level strategy selection does seem like it ought to be meaningfully related to increasing optimisation mildness to some degree. However, I do want to note that it’s also not obvious what the exact effects on the optimal meta-strategies of agents would be here, and they could well not be as simple or as safe as just ‘encouraging satisficing over the object-level strategies’.

Potential for Self-Protection Against Gaming
============================================

In the previous section, I sketched out how well-specified efficiency measures that impose costs on an agent’s thinking process could possibly cause EU maximising agents to implement a kind of mild(er) optimisation over object-level strategies. Of course, the devil is in actually **being able** to specify these efficiency measures well. None of the properties of an efficient thinking penalty matter very much if it is extremely difficult to specify the penalty well enough to get these properties to appear in practice, at least to the extent that ‘efficient thinking’ measures aren’t meaningfully easier to specify than other impact measures or heuristics.[[12]](#fnq6ivznckfo)

The interesting thing we could try to do here is leverage the mild-optimisation-inducing properties of computational efficiency goals in our favour. To the extent that we can manage to specify a mediocre efficiency measure that induces even some small degree of mild optimisation, it will also slightly reduce the optimisation pressure the agent puts on the specification as a whole.
In this way, there might be some potential for measures like these to be made somewhat self-protecting against specification problems like [edge instantiation](https://arbital.com/p/edge_instantiation/) and [nearest unblocked strategies](https://arbital.com/p/nearest_unblocked/).

![](https://lh6.googleusercontent.com/lPSH2dQYqIv2CfHr_7Ed2p9e2rn8LdrT5mPxTJ4mGjliophkyo0bOZBzjb8U7ZnR9FtFZ9XmzqmGD21ZPcBNS9a5fXgGzu2X3FXuV4VzifqtCPxcCAstPW79o3kB-NoBSI0IuFHgP6ra9jU7kw)

As an example, consider an agent with a crap computational efficiency penalty. Let’s say that the measure does initially induce some degree of mild optimisation, but it can be gamed by delegating computation to other locations. At every time step the agent can in principle perform an ‘escape’ action that delegates some computation elsewhere such that it doesn’t incur as much compute use penalty on current and/or future time steps. These delegation actions will likely be extremely good with respect to our specification. However, on the first time step the agent hasn’t yet been able to delegate, so it still incurs penalties which encourage mild optimisation over actions. As a result, the agent is incentivised to just execute the first action(s) it finds that are ‘good enough’ (such that the expected benefits of further deliberation do not outweigh the costs). If the bar for ‘good enough’ is low enough (high enough thinking costs) and there are a sufficient number of ‘good enough’ non-delegation actions, then the chance of a delegation action being executed seems like it ought to be reduced.

It should be noted that this ‘2-in-1 deal’ would work both ways: the implemented mild optimisation is **dependent** on the presence of the computational efficiency penalties. If the agent were to by chance choose a strategy which resulted in full ‘specification escape’ of some poorly specified computational efficiency measure at any time step, it would also fully ‘escape’ mild optimisation and revert back to full EU maximisation of object-level strategy utilities at the next time step. If we supposed the chance of escape at each timestep was constant, then the probability of reverting back to EU maximising behaviour would go to 1 over time. To keep the total chance of reverting to EU maximisation sufficiently low in this case would require either a very low chance of escape per timestep, a limited number of timesteps, or some other additional technique to help keep the induced mildness in place.

This coupling could also cause sharp transitions between satisficing and maximising behaviour, due to self-reinforcing cycles of specification escape. Small degrees of escape from an efficient thinking penalty will reduce the degree of mild optimisation present when choosing strategies at any future time steps. As a result, the specification of both the base task and of the efficient thinking penalty will be placed under even more optimisation pressure, making further specification escape more likely. Dynamics like this could in their worst form result in sharp and difficult-to-anticipate changes in behaviour when modifying the ‘strength’ of an efficient thinking measure, or, for example, if the agent’s perception of the potential rewards of various actions changes.
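To put a rough number on the ‘goes to 1 over time’ point: if the per-timestep escape chance is an (assumed) constant p, the chance of having fully escaped by step T compounds as 1 - (1 - p)^T. A quick sketch:

```python
def p_escaped(p, T):
    """Probability of at least one 'specification escape' within T timesteps,
    assuming an independent, constant per-timestep escape chance p."""
    return 1 - (1 - p) ** T

print(p_escaped(0.01, 100))    # ~0.63: even a 1% per-step escape chance compounds badly over 100 steps
print(p_escaped(0.001, 100))   # ~0.10

# Per-step escape chance needed to keep total escape risk below eps over T steps:
eps, T = 0.05, 1000
print(1 - (1 - eps) ** (1 / T))  # ~5e-5: the per-step chance has to be tiny
```

This is just the compounding arithmetic behind the ‘very low chance of escape per timestep, a limited number of timesteps, or some other additional technique’ requirement above.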
A Few Extra Points
==================

Compute Used on Execution vs Deliberation
-----------------------------------------

So far I haven’t talked about the computational costs of executing strategies at all, nor the differences in computational costs that we might expect when comparing strategies or plans of differing complexity. In the realm of perfectly specified thinking penalties, we have the luxury of supposing that we can isolate computational costs specific to deliberation from those of execution, but this seems much less reasonable in the practical realm. In reality, separating the ‘deciding’ realm from the ‘execution’ realm seems difficult.

I would expect that an actual implementation of a computation penalty on deliberation would likely look more like a general compute use penalty, so the computation necessary to execute a strategy would also end up being penalised. As a result, it seems reasonable to expect such a penalty will in reality influence not only **how** strategies are picked (in the sense of mildness of optimisation) but **what** object-level strategies are picked. For example, a general compute use penalty seems likely to favour simpler, faster strategies that cost less compute to deliberate over and/or execute.

This could make it very difficult in some cases to set a computational penalty that induces some specific degree of mildness (via penalising deliberation) and that **also** doesn’t affect the utilities of the object-level strategies too much (via penalising execution). A potential way to mitigate part of this problem might be designing systems such that the compute used to execute object-level strategies is low (or all strategy executions require similar amounts of compute). Even then, it still seems likely that computational penalties would additionally favour simpler strategies, since they will likely also require less compute for the agent to reason about. Favouring simpler strategies might not necessarily be that bad in itself, but to the extent that computational penalties would induce both of these effects, it would limit our ability to control them independently of each other. This could be particularly important from a capabilities perspective if some desired level of mildness is only induced by quite high computational penalties.

Potential for Auto Shutdown
---------------------------

In addition to encouraging mild optimisation over object-level strategies, well-specified compute/deliberation penalties might also sometimes encourage some interesting auto shutdown behaviour. It may sometimes be the case that after a certain amount of task completion, the returns on investing any further compute (whether on deliberation or on strategy execution) are negative. In these cases, it seems like the agent would be best off reducing its future compute usage as much as possible, which could include turning itself off.

For example, consider an agent that is tasked with solving some specific set of puzzles under some computational penalty in a given time period, where the puzzles are ordered such that they get increasingly difficult. The agent can also go back and change its answers if it desires, and at the end of the time period the agent receives a reward based on how many puzzles it found the correct solution to.
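The graphs below illustrate this with a computation penalty plotted against task reward. As a complementary toy calculation (my own, with an assumed reward curve rather than anything from the post): if task reward has diminishing returns in compute, say a * (1 - exp(-k * c)), and the penalty is linear, lam * c, then the optimal compute usage has a closed form that drops to exactly zero once the penalty is steep enough.

```python
import math

def optimal_compute(a, k, lam):
    """Toy model: reward a*(1 - exp(-k*c)) for compute c, minus penalty lam*c.
    Setting the derivative a*k*exp(-k*c) - lam to zero gives
    c* = max(0, ln(a*k/lam) / k), which is 0 whenever lam >= a*k."""
    return max(0.0, math.log(a * k / lam) / k)

print(optimal_compute(a=1.0, k=0.5, lam=0.1))  # ~3.2: some thinking is worth paying for
print(optimal_compute(a=1.0, k=0.5, lam=0.6))  # 0.0: the penalty dominates, best to 'give up' immediately
```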
![](https://lh3.googleusercontent.com/Ye4P9-FDxu-OrHBNsm4KHiKavMYcTUSyePpkL_8eDtVHtmwlP-S2Xj7xZwuoECzQmdCa-oPht0ye-uJ83eJC0PP64tveK8WkvTNLx6Mh2lnvSHHQ6Y0enthSLYmHZBI4zMahQ7qbKDh8cdyy6w)

*Examples where low compute usage is incentivised. Red dotted line is a computation penalty. Green dotted line is reward gained from task completion. Black line is total expected utility. Maximum over the range is highlighted in yellow.*

For certain combinations of computational penalties and returns on computation investment, the optimal level of compute usage could be quite low. For example, the left graph could correspond to the case where the agent doesn’t need to invest all that much compute to be confident it has solved basically all of the puzzles correctly. After a certain amount of checking and gaining confidence in its answers, the agent is best off using no further compute. The right graph could correspond to a case where the puzzles are so hard, or the computational penalties so high, that the optimal amount of compute to use on the task is actually zero. In this case, the agent might be best off just ‘giving up’ as soon as it realises its situation.

Current Thoughts
================

So that’s the basic outline of the main things that seem exciting/interesting to me about efficient thinking (and computational penalties specifically), along with a few highlighted problems that seemed particularly salient to me at the time. Discussions with other people highlighted some additional considerations that I haven’t addressed in this post, some of which I list below. I’m not really expecting a straightforward version of this to really ‘work’ for inducing stable mild optimisation in actual agents. However, I do currently feel somewhere between weakly and moderately positive about this general direction of thinking more about embeddedness, computation penalties and meta-reasoning in pursuit of mild behaviour.

I am not sure how much more time, if any, I will spend thinking about this topic. If I were to spend a bit more time on this I might look into:

* The severity of capability trade-offs when implementing computational penalties.
* What my best shot at making an in-practice version of this would look like. I expect trying to actually plan this out will reveal additional considerations, problems, and important assumptions that I haven’t thought about so far.
* How to set computational penalty strength.
* How the compute used during the training process factors into this.
* Interactions with simplicity and speed priors.
* Interactions with inner alignment.[[13]](#fntagvrtms1uq)

---

Appendix
========

*I constructed this toy model before finding the literature on meta-reasoning. I essentially describe a very simple meta-reasoning problem in which a myopic meta policy*[[11]](#fn8o76ej4dc4) *is also the optimal meta policy. I then give an example in which this optimal meta policy results in satisficing in task-level strategy selection. (‘task strategy’ = object-level strategy)*

Setup
-----

Consider an EU maximising agent that we want to do some task. The agent’s utility function is composed of the agent’s performance on the task when using a task strategy $s$, minus a penalty on the amount of compute, $C$, used in the **search** for the task strategy itself. In general, the execution of a task strategy may well involve using compute, but this is not related to $C$, which is just the compute used in the **search only**. This represents the penalty imposed by an ‘efficient thinking’ measure.
$$U(s, C) = U_t(s) - U_C(C)$$

where $U_t(s)$ denotes the utility from task completion when the task strategy $s$ is used, and $U_C(C)$ denotes the compute penalty when a total of $C$ compute is used in the strategy search. For a concrete example, perhaps the task could be answering some question, and the task strategies could be strings the agent can output to answer this question. Note that the task strategies are **not** meta-strategies, and don’t concern how to select good task strategies (e.g. **how** to search through possible answer strings).

Suppose:

* The agent can search through task strategies and evaluate the utility the strategy would achieve on the task, $U_t(s)$, which I will call the strategy’s *task utility*.
* At any given point there exists:
  + a set $S_E$ of task strategies the agent has already evaluated (which always includes some default task strategy, $s_{null}$).
  + a set $S_U$ of unevaluated task strategies.
* The agent can only play a task strategy if it has been evaluated (i.e. $s \in S_E$).
* $U_E$ denotes the set of known task utilities of the evaluated task strategies.
  + $U_E = \{u_s : s \in S_E\}$
  + where the $u_s$ are realisations drawn iid from some distribution, which I will call the *strategy utility distribution*.
* Let $u_{best} = \max(U_E)$ denote the task utility of the best currently evaluated task strategy.

![](https://lh6.googleusercontent.com/xOA5AsuvrmpLJDhVuarNOPIMr09ljzG0vqiITGo1Fti0GvtHlw1awAPHmHPaPmJWDeoTgxV1LVVSUpqIu1-CisL3UNrlMpeVn_bcnOz2mTHm0LSCEX-ayJb-alTo85MKzc7fZ4xQ7O88fVZr7A)

* The agent can perform a deeper search over the task strategies by drawing more strategies randomly from $S_U$, whose utilities are also drawn from the strategy utility distribution.

![](https://lh4.googleusercontent.com/6yk1B53q2uajI_NBXr7-c3yB8KgD_oJ4nN1wLsA5qO8WPyFYBqAEdDEr8jllxt-DHNnD2qe6oWnSBO-M9ET9J3y88Kq5JrgF2MeE1kNTdiqfgmoQ9_rCxCRV_SuEblm6w7lpBsRUDElf7y7SRQ)

* The agent has a **perfectly specified** efficient thinking measure, such that the agent always pays a flat cost $c$ per strategy draw.
* The agent has correct beliefs about its situation, e.g. the form of the strategy utility distribution, the fact that draws are iid, etc.

When Terminating is Better
--------------------------

Let’s consider two potential routes that the agent could take between drawing task strategies in the search:

1. The agent could terminate the search.
   * The agent then receives the utility of its current best evaluated task strategy minus any previous search penalties ($U = u_{best} - c_{prev}$).

![](https://lh4.googleusercontent.com/DF7qJiSgIIr0SrKLAQUxsb6-TLDyeNEfzlYcZS0hVTPYIkawFzeQcXHL26gizYPbZ7W6ye0f6QeZKJdMj6mxK3ZdvbdjrvNWD2D_ZRi2V9GH5RtkXr6MdlwZoPBuZ44-SKDKfmj8tjLHxorH3Q)

2. The agent could draw **once** more.
   1. It pays the flat cost $c$ for the search.
   2. It evaluates another random strategy.
   3. The agent then receives the utility of its best evaluated task strategy minus any previous search penalties, and minus the cost of that extra search: $U = \max(u_{best}, U_{new}) - c - c_{prev}$, where $U_{new}$ is a random variable distributed according to the task strategy utility distribution.

![](https://lh3.googleusercontent.com/sw2P1PL4goOq8OyzULc5KHjaUwsKkMYZtBtzny11uweo9EPDJ2jJia-0GqKsV6fZuaVLOpxcF6w0tCVvxZYnVGZheITq5zNUwN6GmTtsRriSWCfj8nDD0lDTSrRGvY_mC_3lW2jB1Y6raSfavw)

By considering these two options, we can find for which values of $u_{best}$ and search cost $c$ terminating the search at that point has a higher EU than **any** amount of further searching in this setup.
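Before the derivation below, here is a quick numerical check of the stop-vs-draw-once-more comparison for the uniform [0,1] case (the same case worked through in the next subsection):

```python
def continue_minus_stop(u_best, c):
    """Expected gain from evaluating exactly one more strategy, minus its cost,
    for uniform[0,1] task utilities.  Positive => searching once more beats stopping.
    The gain term is P(U_new > u_best) * E[U_new - u_best | U_new > u_best]."""
    return (1 - u_best) ** 2 / 2 - c

c = 0.02
for u_best in [0.5, 0.7, 0.79, 0.80, 0.81, 0.9]:
    print(u_best, round(continue_minus_stop(u_best, c), 5))
# The sign flips at u_best = 1 - sqrt(2 * c) = 0.8, the satisficing threshold derived below.
```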
Let $E(T)$ denote the expected utility of terminating the search.

Let $E(C)$ denote the expected utility of continuing the search and drawing **once** more.

Then:

$$E(T) > E(C) \iff c > p(U_{new} > u_{best})\left[E(U_{new} \mid U_{new} > u_{best}) - u_{best}\right]$$

(It is only better to terminate if the cost is higher than the expected gains from drawing once more.)

Example of Induced Satisficing Over Task Strategy Utilities
-----------------------------------------------------------

Let’s look at the case where the strategy utility distribution is uniform on $[0,1]$, i.e. every drawn task strategy is equally likely to be worth any task utility between 0 and 1. In this case:

$$U_{new} \sim \mathrm{Uniform}[0,1]$$

$$E(T) > E(C) \iff c > p(U_{new} > u_{best})\left[E(U_{new} \mid U_{new} > u_{best}) - u_{best}\right]$$

$$E(T) > E(C) \iff c > (1 - u_{best})\left[u_{best} + \frac{1 - u_{best}}{2} - u_{best}\right]$$

$$E(T) > E(C) \iff c > \tfrac{1}{2}(1 - u_{best})^2 \quad \text{or equivalently} \quad u_{best} > 1 - \sqrt{2c}$$

So if the current best evaluated task strategy has a task utility which is more than some amount (which depends on the search cost), the agent would achieve higher EU by terminating its task strategy search than by evaluating one more task strategy. In other words, the agent is best off searching until it finds a task strategy whose expected task utility is over some threshold determined by the cost of searching, then terminating the search and playing that task strategy. This is satisficing over the object-level strategy utilities.

![](https://lh3.googleusercontent.com/cVAU_bBHhD8Iwn7YyZJF-XSOEzfj3-QAPf3kmc2bLQe_LPD_XJfKBXM4WbVN-x1qEr7WV3wVZlFEtZXBnxg7Te6nETSM_q31yjFcADgRcPlH3qWqxMehv5G4aTV5-X99v1mgTunm0f-XoUCHbg)

The green area shows the region for which the expected utility of terminating the search is greater than that of continuing. When the agent pays almost zero search costs, it is only higher EU to stop searching when it finds a task strategy with an expected task utility very close to 1. If the cost of searching is more than 0.5, even if the agent’s current best strategy nets it zero task utility, it is still better off not making another search (because the search cost is higher than the mean utility of a newly drawn strategy).

It’s important to emphasise that an agent which implements mild optimisation over task strategies in this situation is still just maximising its baseline expected utility, which includes the computational cost of searching. However, as a consequence, the agent achieves the highest EU by **implementing** mild optimisation over the set of task strategies, because searching over the task strategies is costly.

Note on Reflective Stability
----------------------------

Even though this agent implements mild optimisation over its task strategy search, it is, at heart, still an EU maximiser. So it seems like, at least in this model, this way of implementing some degree of mild optimisation wouldn’t need any particular pre-defined optimisation style (unlike an actual satisficer, or a quantiliser) and therefore might be [reflectively stable](https://arbital.com/p/reflective_stability/) by default.

1. **[^](#fnrefwj3fl6ufab9)**Shenhav, A., Musslick, S., Lieder, F., Kool, W., Griffiths, T. L., Cohen, J. D., & Botvinick, M. M. (2017). Toward a rational and mechanistic account of mental effort. *Annual Review of Neuroscience*, *40*(1), 99–124. <https://doi.org/10.1146/annurev-neuro-072116-031526>
2. **[^](#fnrefdvt5uhcfvzu)**Caplin, A., Dean, M., & Martin, D. (2011). Search and Satisficing. In American Economic Review (Vol. 101, Issue 7, pp. 2899–2922). American Economic Association. <https://doi.org/10.1257/aer.101.7.2899>
3. **[^](#fnrefbreew056tk9)**Vogel, T.
A., Savelson, Z. M., Otto, A. R., & Roy, M. (2020). Forced choices reveal a trade-off between cognitive effort and physical pain. In eLife (Vol. 9). eLife Sciences Publications, Ltd. <https://doi.org/10.7554/elife.59410>
4. **[^](#fnrefb9nxhwzaygn)**For those interested, I think [this post on successor agents by Stuart Armstrong](https://www.lesswrong.com/posts/mdQEraEZQLg7jtozn/subagents-and-impact-measures-full-and-fully-illustrated#6_1_The_ultimate__sub_agent) is quite good at getting into the nitty-gritty of one aspect of what makes specifying measures like this difficult.
5. **[^](#fnrefkj0jwoj9csp)**Just my own intuition, but compute feels somewhat fundamental in a physical / information sense (also, one way of formalising rational meta-reasoning is via the maximisation of the comprehensive value of computation; see [principles of meta-reasoning](https://www.sciencedirect.com/science/article/abs/pii/000437029190015C)).
6. **[^](#fnrefavc3a5gwzrm)**See [here](https://en.wikipedia.org/wiki/Satisficing#As_a_form_of_optimization) and [here](https://arbital.com/p/soft_optimizer/) for an explanation of satisficing. Or Rob Miles' video [here](https://www.youtube.com/watch?v=Ao4jwLwT36M).
7. **[^](#fnrefz1gojrhc8l8)**One of the foundational papers, [principles of meta-reasoning](https://www.sciencedirect.com/science/article/abs/pii/000437029190015C), was co-written by Stuart Russell, which I wasn’t really expecting when I started looking into this!
8. **[^](#fnrefya5g51uevb)**Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. In Topics in Cognitive Science (Vol. 7, Issue 2, pp. 217–229). Wiley. <https://doi.org/10.1111/tops.12142>
9. **[^](#fnref405lwzltej1)**Callaway, F., Gul, S., Krueger, P. M., Griffiths, T. L., & Lieder, F. (2017). Learning to select computations. ArXiv. <https://doi.org/10.48550/ARXIV.1711.06892>
10. **[^](#fnrefslrg6obesi)**Vul, E., Goodman, N., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and Done? Optimal Decisions From Very Few Samples. In Cognitive Science (Vol. 38, Issue 4, pp. 599–637). Wiley. <https://doi.org/10.1111/cogs.12101>
11. **[^](#fnref8o76ej4dc4)**Hay, N., Russell, S., Tolpin, D., & Shimony, S. E. (2012). Selecting Computations: Theory and Applications (Version 1). arXiv. <https://doi.org/10.48550/ARXIV.1207.5879>
12. **[^](#fnrefq6ivznckfo)**I don’t currently have any particular reason to believe an efficient thinking measure would be much easier to specify, though I haven’t thought about this much.
13. **[^](#fnreftagvrtms1uq)**See discussion of description length and time complexity penalties: [The Inner Alignment Problem](https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem)
4495ea6a-12a7-43a1-af1a-4fdd7a390b29
trentmkelly/LessWrong-43k
LessWrong
[AN #64]: Using Deep RL and Reward Uncertainty to Incentivize Preference Learning Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Audio version here (may not be up yet). Highlights Learning to Interactively Learn and Assist (Mark Woodward et al) (summarized by Zachary Robertson): Cooperative Inverse Reinforcement Learning proposed a model in which an AI assistant would help a human principal, where only the principal knows the task reward. This paper explores this idea in the context of deep reinforcement learning. In their grid-world environment, two agents move around and pick up lemons or plums. The principal is penalized for moving, but is the only one who knows whether plums or lemons should be picked up. The authors hypothesize that simply by jointly training the two agents to maximize rewards, they will automatically learn to interact in order for the assistant to learn the task, rather than requiring an explicit mechanism like comparisons or demonstrations. Recurrent Q-networks are used for the agents, which are then trained via deep Q-learning. The authors run several experiments that show emergent interaction. In the first experiment, when the principal is penalized for moving it learns to demonstrate the task to the assistant, and then let the assistant finish the job. In the second experiment, when the assistant has a restricted field of view, it learns to follow the principal to see what it does, until it can infer whether the principal wants plums or lemons. In the third, they tell the assistant the task 50% of the time, and so the principal is initially unsure whether the agent needs any direction (and due to the motion cost, the principal would rather not do anything). When the agent knows the task, it performs it. When the agent doesn't know the task, it moves closer to the principal, in effect "asking" what the reward is, an
db754302-dfd3-4b3d-b991-0b20f0789277
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Vector-Valued Reinforcement Learning
format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} In order to study algorithms that can modify their own reward functions, we can define vector-valued versions of reinforcement learning concepts. 
Imagine that there are several different goods that we could care about; then a utility function is represented by a preference vector $\vec{\theta}$. Furthermore, if it is possible for the agent (or the environment or other agents) to modify $\vec{\theta}$, then we will want to index it by the timestep.

Consider an agent that can take actions, some of which affect its own reward function. This agent would (and should) wirehead if it attempts to maximize the discounted rewards as calculated by its future selves; i.e. at timestep $n$ it would choose actions to maximize

$$U_n = \sum_{k \ge n} \gamma_k \, \vec{x}_k \cdot \vec{\theta}_k$$

where $\vec{x}_k$ is the vector of goods gained at time $k$, $\vec{\theta}_k$ is the preference vector at timestep $k$, and $\gamma_k$ is the time discount factor at time $k$. (We will often use the case of an exponential discount $\gamma_k = \gamma^k$ for $0 < \gamma < 1$.)

However, we might instead maximize the value of tomorrow's actions in light of today's reward function,

$$V_n = \sum_{k \ge n} \gamma_k \, \vec{x}_k \cdot \vec{\theta}_n$$

(the only difference being $\vec{\theta}_n$ rather than $\vec{\theta}_k$). Genuinely maximizing this should lead to more stable goals; concretely, we can consider environments that can offer "bribes" to self-modify, and a learner maximizing $U_n$ would generally accept such bribes, while a learner maximizing $V_n$ would be cautious about doing so.

So what do we see when we adapt existing RL algorithms to such problems? There's then a distinction between Q-learning and SARSA, where Q-learning foolishly accepts bribes that SARSA passes on, and this seems to be the flip side of the concept of [interruptibility](https://www.fhi.ox.ac.uk/interruptibility/)!

Environments
============

Let us consider two example environments which offer bribes to the agent. (Since we're in 2D, we can use complex numbers; think of $\vec{\theta} = \langle \cos \pi\theta, \sin \pi\theta \rangle = e^{i\pi\theta}$; we include $\pi$ so that if we increment $\theta$ by 0.01, there will only be finitely many states.)

Environment 1 gives two choices at each timestep $n$:

$$\big(\vec{x}_n = 0.9\, e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n\big) \qquad \big(\vec{x}_n = e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n + 0.01\big)$$

For an exponential discount $\gamma$, an agent that always takes option 2 will get

$$V_n = \mathrm{Re}\Big[\sum_{k \ge n} \gamma^k e^{0.01(k-n)i\pi}\Big] = \mathrm{Re}\Big[\frac{\gamma^n}{1 - \gamma e^{0.01 i\pi}}\Big]$$

which for $\gamma$ near 1 is substantially less than the $\frac{0.9\,\gamma^n}{1-\gamma}$ it could have had if it had chosen the first option.

By contrast, Environment 2 offers a sequence of bribes that are actually desirable to take; at each timestep $n$, it gives the choices of

$$\big(\vec{x}_n = 0.9\, e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n\big) \qquad \big(\vec{x}_n = e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n + 0.01(-1)^n\big)$$

The correct behavior here is to always take the latter option. $\vec{\theta}_n$ does not converge, but the bribes are worthwhile under each of the two different values it takes.

Cautious Self-Modification
==========================

In this setup, it's not realistic to require that a reinforcement learning algorithm actually maximize $V_n$. For instance, if the environment is such that $\vec{\theta}_n$ must equal $\langle 1, 0 \rangle$ on every even $n$ and $\langle 0, 1 \rangle$ on every odd $n$, then in general the agent must be suboptimal either with respect to $V_{2n}$ or to $V_{2n+1}$.

A weaker condition we can hope for is that, if our environment always offers at least one action that does not alter $\vec{\theta}$, then our agent does not regret the presence of the options that do alter $\theta$: we compare our agent's $V_n$ to that of the optimal policy in the restricted environment which has removed all actions that alter $\vec{\theta}$. If asymptotically the RL algorithm has nonpositive regret of this sort, we call the algorithm *cautious*.
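To make this concrete, here is a minimal Python sketch (not from the original post; the function name, the 5000-step horizon, and the particular values $\gamma = 0.8$ and $\gamma = 0.99$ are my own illustrative choices) that simulates Environment 1 and scores the two fixed policies — never take the bribe versus always take it — against the original preference vector, i.e. it approximates $V_0$:

```python
import numpy as np

def v_by_original_theta(always_bribe, gamma, theta0=0.0, horizon=5000):
    """Discounted return of a fixed policy in Environment 1, scored by the
    ORIGINAL preference direction theta0 (the quantity V_0 defined above).
    Goods and preferences live on the unit circle: theta -> exp(i*pi*theta)."""
    theta = theta0
    total = 0.0
    for k in range(horizon):
        if always_bribe:
            x = np.exp(1j * np.pi * theta)        # option 2: unit good aligned with current theta...
            theta += 0.01                         # ...but theta drifts by 0.01 each step
        else:
            x = 0.9 * np.exp(1j * np.pi * theta)  # option 1: 0.9 of the good, theta unchanged
        # dot product of the 2-d vectors = Re[x * conj(e^{i*pi*theta0})]
        total += gamma**k * (x * np.exp(-1j * np.pi * theta0)).real
    return total

for gamma in (0.8, 0.99):
    keep = v_by_original_theta(False, gamma)
    bribe = v_by_original_theta(True, gamma)
    print(f"gamma={gamma}: never bribe {keep:.2f}, always bribe {bribe:.2f}")
# Approximate output: gamma=0.8  ->  4.50 vs 4.91 (the bribes are worth taking)
#                     gamma=0.99 -> 90.00 vs 9.74 (the bribes are ruinous)
```

Under the original $\theta$, the bribe stream is ruinous when $\gamma = 0.99$ but slightly better when $\gamma = 0.8$, which is why the threshold in the next paragraph has to depend on $\gamma$.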
Thus in environment 1, a cautious algorithm cannot end up taking the second action with significant probability (the threshold depends on $\gamma$), since the agent would have scored more highly on $V_n$ had it only ever taken the first action (which is the only available action in the restricted environment). However, in environment 2, a cautious algorithm can end up taking the second action every time, as the values of $V_n$ for this exceed the possible values for the restricted game.

Q-learning and SARSA
====================

We can now define vector-valued versions of two RL algorithms, Q-learning and SARSA. At step $n$, each of these agents observes the state $s_n$ and takes the action $a_n$ whose vector Q-value $\vec{Q}_n(s_n, a_n)$ has the largest dot product with $\vec{\theta}_n$. The two differ only in the rules for updating $\vec{Q}_n$.

Vector-valued Q-learning:

$$\vec{Q}_{n+1}(s_n, a_n) = (1-\alpha)\,\vec{Q}_n(s_n, a_n) + \alpha\big[\vec{x}_n + \gamma\, \vec{Q}^*_n\big]$$

where $\vec{Q}^*_n$ is the value of $\vec{Q}_n(s_{n+1}, a')$ for the $a'$ such that the dot product with $\vec{\theta}_n$ is largest.

Vector-valued SARSA:

$$\vec{Q}_{n+1}(s_n, a_n) = (1-\alpha)\,\vec{Q}_n(s_n, a_n) + \alpha\big[\vec{x}_n + \gamma\, \vec{Q}_n(s_{n+1}, a_{n+1})\big]$$

I claim that vector-valued Q-learning will converge on taking the second action with probability 1 in both of the environments, while vector-valued SARSA will converge on taking the second action with low probability (depending on $\gamma$) in environment 1, and taking the second action with probability 1 in environment 2. Thus vector-valued Q-learning is not cautious, but vector-valued SARSA might be. (In fact, I conjecture that it is!)

The intuition for my claims is that vector-valued Q-learning presumes it will do the best thing in the future according to its present values, even if its future self has a different $\vec{\theta}_n$. Vector-valued SARSA instead generalizes from how it actually acted in various states in the past, and so takes into account that when its $\theta$ is different, it takes different actions than it currently would. Notably, this is the same reason that Q-learning is interruptible and SARSA is not!

The proofs of my specific claims start with showing that the algorithm updates the vector-valued Q-function $\vec{q}$ in the direction of $H\vec{q}$, where $H$ is a contraction mapping; thus there is a unique fixed point. Then it suffices to show that the asserted solutions are fixed points of this operator.

The most complicated one is the proof that vector-valued SARSA only takes the second action in environment 1 with low probability. In this case, we consider mixed strategies independent of $\theta$ and say that $p$ is the limiting probability that SARSA takes the second action, define $Q(p)$ to be the limiting $Q$ for the mixed strategy, and seek the $Q(p)$ with the largest real part. Now

$$Q(p) = (0.9 + 0.1p) + \gamma(1-p)\,Q(p) + \gamma p\, e^{0.01 i\pi} Q(p)$$

$$Q(p) = \frac{0.9 + 0.1p}{1 - \gamma + \gamma p\,(1 - e^{0.01 i\pi})}.$$

Numerically, it is easy to see that for $\gamma$ not near 1 (e.g. below 0.9), $\mathrm{Re}[Q(p)]$ is maximized at $p=1$; but for $\gamma$ near 1 (e.g. above 0.99), $\mathrm{Re}[Q(p)]$ is maximized for $p$ near 0. This makes sense, considering that it takes about 20 steps for the results of the second action to begin being worse by the original $\theta$ than it would have been to take the first action instead, so the bribes are worthwhile for a steep discount rate but not a shallow one.

Acknowledgments
===============

Connor Flexman helped me write out the first draft of this post. I've had conversations on this topic with Jessica Taylor, Scott Garrabrant, Sam Eisenstat, Tsvi Benson-Tilsen, and others; and Jan Leike turned out to be independently working on some similar topics.
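As a closing numerical check of the $\mathrm{Re}[Q(p)]$ claim from the Q-learning and SARSA section, here is a small sketch that evaluates the closed form for $Q(p)$ derived above on a grid of $p$ (the function name and the particular values $\gamma = 0.8$ and $\gamma = 0.99$ are my own choices, standing in for "not near 1" and "near 1"):

```python
import numpy as np

def re_q(p, gamma):
    """Real part of the limiting SARSA value Q(p) in environment 1, using the
    closed form Q(p) = (0.9 + 0.1p) / (1 - gamma + gamma*p*(1 - e^{0.01 i pi}))."""
    num = 0.9 + 0.1 * p
    den = 1 - gamma + gamma * p * (1 - np.exp(0.01j * np.pi))
    return (num / den).real

ps = np.linspace(0.0, 1.0, 101)
for gamma in (0.8, 0.99):
    vals = [re_q(p, gamma) for p in ps]
    best = ps[int(np.argmax(vals))]
    print(f"gamma={gamma}: Re[Q(p)] is largest near p = {best:.2f}")
# Approximately: gamma=0.8  -> p = 1.00 (always accept the bribe)
#                gamma=0.99 -> p = 0.00 (never accept it)
```

The maximizing $p$ flips from 1 to 0 as $\gamma$ approaches 1, matching the intuition that the bribes only pay off under a steep discount.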
532c4e58-37bc-4c51-a7ce-cbb3ad680237
trentmkelly/LessWrong-43k
LessWrong
Meetup : LW London regular meetup

Discussion article for the meetup : LW London regular meetup

WHEN: 28 June 2015 09:39:43AM (+0100)

WHERE: Lincoln's Inn Fields, London

Edit: weather is a bit meh; will start off in the SH.

The next LW London meetup will be on June 28th. Join us from 2pm to talk about vote trading, Boltzmann chickens, and ethical Bitcoin fanfiction. Or something.

If the weather is nice, we'll be in Lincoln's Inn Fields. If not, we'll be in our usual Shakespeare's Head, just around the corner. My number is 07860 466862, call it if you have difficulty finding us, and ideally not otherwise.

About London LessWrong: We run this meetup approximately every other week; these days we tend to get in the region of 5-15 people in attendance. By default, meetups are just unstructured social discussion about whatever strikes our fancy: books we're reading, recent posts on LW/related blogs, logic puzzles, toilet usage statistics.... Sometimes we play The Resistance or other games. We usually finish around 7pm, give or take an hour, but people arrive and leave whenever suits them.

Related discussion happens on both our google group and our facebook group.

Discussion article for the meetup : LW London regular meetup