id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
8a8f8cb9-bfc9-4719-9364-8fe9289ce27f
trentmkelly/LessWrong-43k
LessWrong
Response to Holden’s alignment plan This work was supported by the Monastic Academy for the Preservation of Life on Earth. You can support my work directly here. I will give a short presentation of this work followed by discussion on Saturday (Dec 24) at 9am Pacific / 12pm Eastern. RSVP here. Outline * This is a response to Holden Karnofsky’s plan for aligning powerful AI systems in the case that such systems are developed soon. * I give a summary of Holden’s plan, and then critique it. * My basic critique is that the tools Holden is proposing are too similar to what has been tried in the construction of human institutions, and I argue that we have failed to create large numbers of human institutions that alleviate rather than exacerbate existential risk. The Karnofsky Plan Holden has written about how we might align very powerful AI systems if such systems were to be developed soon. The basic approach Holden proposes is to train AI systems to be honest, legible, and corrigible, and value-aligned using human supervision. Honesty means not engaging in deceit or manipulation of the supervisor. Legibility means creating artifacts that help supervisors to accurately review decisions. Corrigibility means being willing to stop when the human supervisor says no, no matter how strongly the AI believes that it is doing the right thing. Value-alignment means pursuing the goals of the supervisor. To train such AI systems, Holden says we need to pay attention to the following properties of the training environment: (1) The human supervisor should provide high-quality feedback. This means that if we are training an AI system to be honest then the human supervisor should give a positive reward if and only if the AI system is, in fact, being honest. It can be hard to know whether an AI system is being honest. Therefore we may design further tools to help the human assess whether the AI system is being honest. In order to do this we may train auxiliary AI systems to assess our primary AI system, or we may
2a544da2-6afa-426b-94a5-fd25a5347d0c
trentmkelly/LessWrong-43k
LessWrong
Do Fandoms Need Awfulness? Stephen Bond, "Objects of Fandom": > ...my theory is that for something to attract fans, it must have an aspect of truly monumental badness about it. > > Raiders of the Lost Ark is a robust potboiler, tongue-in-cheek, very competently done. I think it's enjoyable, but even among those who don't, it's hard to see the film attracting actual derision. Boredom or irritation, probably, but nothing more. Star Wars, on the other hand.... From one perspective, it's an entertaining space opera, but from a slightly different perspective, an imperceptible twist of the glass, it's laughably awful. Utterly ridiculously bad. And it's this very badness that makes so many people take up arms in its defence. > > ...It's impossible to imagine a fan of Animal Farm, the Well-Tempered Clavier, or the theory of gravity. Such works can defend themselves. But badness, especially badness of an obvious, monumental variety, inspires devotion. The quality of the work, in the face of such glaring shortcomings, becomes a matter of faith -- and faith is a much stronger bond than mere appreciation. It drives fans together, gives them strength against those who sneer... And so the fan groups of Tolkien, Star Trek, Spider-man, Japanese kiddie-cartoons etc. develop an almost cult-like character. "Uh oh," I said to myself on first reading this, "Is this why my fans are more intense than Robin Hanson's fans?  And if I write a rationality book, should I actually give in to temptation and self-indulgence and write in Twelve Virtues style, just so that it has something attackable for fans to defend?" But the second time I turned my thoughts toward this question, I performed that oft-neglected operation, asking:  "I read it on the Internet, but is it actually true?"  Just because it's unpleasant doesn't mean it's true.  And just because it provides a bit of cynicism that would give me rationality-credit to acknowledge, doesn't mean it becomes true just so I can earn the rationality-credit. The first
0abf1b24-12c8-407e-bdfb-dc1ff5ce3056
StampyAI/alignment-research-dataset/blogs
Blogs
Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink” ![Waking Up with Sam Harris](https://intelligence.org/wp-content/uploads/2018/02/wakingup.png) MIRI senior researcher Eliezer Yudkowsky was recently invited to be a guest on Sam Harris’ “[Waking Up](https://samharris.org/podcast/)” podcast. Sam is a neuroscientist and popular author who writes on topics related to philosophy, religion, and public discourse. The following is a complete transcript of Sam and Eliezer’s conversation, **[AI: Racing Toward the Brink](https://samharris.org/podcasts/116-ai-racing-toward-brink/)**. #### Contents * 1. **[Intelligence and generality](https://intelligence.org/feed/?paged=19#1)** — [0:05:26](https://overcast.fm/+Ic2hwsH2U/5:26) * 2. **[Orthogonal capabilities and goals in AI](https://intelligence.org/feed/?paged=19#2)** — [0:25:21](https://overcast.fm/+Ic2hwsH2U/25:21) * 3. **[Cognitive uncontainability and instrumental convergence](https://intelligence.org/feed/?paged=19#3)** — [0:53:39](https://overcast.fm/+Ic2hwsH2U/53:39) * 4. **[The AI alignment problem](https://intelligence.org/feed/?paged=19#4)** — [1:09:09](https://overcast.fm/+Ic2hwsH2U/1:09:09) * 5. **[No fire alarm for AGI](https://intelligence.org/feed/?paged=19#5)** — [1:21:40](https://overcast.fm/+Ic2hwsH2U/1:21:40) * 6. **[Accidental AI, mindcrime, and MIRI](https://intelligence.org/feed/?paged=19#6)** — [1:34:30](https://overcast.fm/+Ic2hwsH2U/1:34:30) * 7. **[Inadequate equilibria](https://intelligence.org/feed/?paged=19#7)** — [1:44:40](https://overcast.fm/+Ic2hwsH2U/1:44:40) * 8. **[Rapid capability gain in AGI](https://intelligence.org/feed/?paged=19#8)** — [1:59:02](https://overcast.fm/+Ic2hwsH2U/1:59:02)   ### 1. Intelligence and generality ([0:05:26](https://overcast.fm/+Ic2hwsH2U/5:26)) --- **Sam Harris:** I am here with Eliezer Yudkowsky. Eliezer, thanks for coming on the podcast. **Eliezer Yudkowsky:** You’re quite welcome. It’s an honor to be here. **Sam:** You have been a much requested guest over the years. You have quite the cult following, for obvious reasons. For those who are not familiar with your work, they will understand the reasons once we get into talking about things. But you’ve also been very present online as a blogger. I don’t know if you’re still blogging a lot, but let’s just summarize your background for a bit and then tell people what you have been doing intellectually for the last twenty years or so. **Eliezer:** I would describe myself as a decision theorist. A lot of other people would say that I’m in artificial intelligence, and in particular in the theory of how to make sufficiently advanced artificial intelligences that do a particular thing and don’t destroy the world as a side-effect. I would call that “AI alignment,” following Stuart Russell. Other people would call that “AI control,” or “AI safety,” or “AI risk,” none of which are terms that I really like. I also have an important sideline in the art of human rationality: the way of achieving [the map that reflects the territory](http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/) and figuring out how to navigate reality to where you want it to go, from a probability theory / decision theory / cognitive biases perspective. I wrote two or three years of blog posts, one a day, on that, and it was collected into a book called [*Rationality: From AI to Zombies*](https://intelligence.org/rationality-ai-zombies/). **Sam:** Which I’ve read, and which is really worth reading. You have a very clear and aphoristic way of writing; it’s really quite wonderful. I highly recommend that book. **Eliezer:** Thank you, thank you. **Sam:** Your background is unconventional. For instance, you did not go to high school, correct? Let alone college or graduate school. Summarize that for us. **Eliezer:** The system didn’t fit me that well, and I’m good at self-teaching. I guess when I started out I thought I was going to go into something like evolutionary psychology or possibly neuroscience, and then I discovered probability theory, statistics, decision theory, and came to specialize in that more and more over the years. **Sam:** How did you not wind up going to high school? What was that decision like? **Eliezer:** Sort of like a mental crash around the time I hit puberty—or like a physical crash, even. I just did not have the stamina to make it through a whole day of classes at the time. (*laughs*) I’m not sure how well I’d do trying to go to high school now, honestly. But it was clear that I could self-teach, so that’s what I did. **Sam:** And where did you grow up? **Eliezer:** Chicago, Illinois. **Sam:** Let’s fast forward to the center of the bull’s eye for your intellectual life here. You have a new book out, which we’ll talk about second. Your new book is [*Inadequate Equilibria: Where and How Civilizations Get Stuck*](https://equilibriabook.com/). Unfortunately, I’ve only read half of that, which I’m also enjoying. I’ve certainly read enough to start a conversation on that. But we should start with artificial intelligence, because it’s a topic that I’ve touched a bunch on in the podcast which you have strong opinions about, and it’s really how we came together. You and I first met at [that conference in Puerto Rico](https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/), which was the first of these AI safety / alignment discussions that I was aware of. I’m sure there have been others, but that was a pretty interesting gathering. So let’s talk about AI and the possible problem with where we’re headed, and the near-term problem that many people in the field and at the periphery of the field don’t seem to take the problem (as we conceive it) seriously. Let’s just start with the basic picture and define some terms. I suppose we should define “intelligence” first, and then jump into the differences between strong and weak or general versus narrow AI. Do you want to start us off on that? **Eliezer:** Sure. Preamble disclaimer, though: In the field in general, not everyone you ask would give you the same definition of intelligence. A lot of times in cases like those it’s good to sort of go back to observational basics. We know that in a certain way, human beings seem a lot more competent than chimpanzees, which seems to be a similar dimension to the one where chimpanzees are more competent than mice, or that mice are more competent than spiders. People have tried various theories about what this dimension is, they’ve tried various definitions of it. But if you went back a few centuries and asked somebody to define “fire,” the less wise ones would say: “Ah, fire is the release of phlogiston. Fire is one of the four elements.” And the truly wise ones would say, “Well, fire is the sort of orangey bright hot stuff that comes out of wood and spreads along wood.” They would tell you what it *looked like*, and put that prior to their theories of what it *was*. So what this mysterious thing *looks like* is that humans can build space shuttles and go to the Moon, and mice can’t, and we think it has something to do with our brains. **Sam:** Yeah. I think we can make it more abstract than that. Tell me if you think this is not generic enough to be accepted by most people in the field: Whatever intelligence may be in specific contexts, generally speaking it’s the ability to meet goals, perhaps across a diverse range of environments. We might want to add that it’s at least implicit in the “intelligence” that interests us that it means an ability to do this flexibly, rather than by rote following the same strategy again and again blindly. Does that seem like a reasonable starting point? **Eliezer:** I think that that would get fairly widespread agreement, and it matches up well with some of the things that are in AI textbooks. If I’m allowed to take it a bit further and begin injecting my own viewpoint into it, I would refine it and say that by “achieve goals” we mean something like “squeezing the [measure](https://en.wikipedia.org/wiki/Probability_measure) of possible futures higher in your [preference](https://en.wikipedia.org/wiki/Preference_(economics)) ordering.” If we took all the possible outcomes, and we ranked them from the ones you like least to the ones you like most, then as you achieve your goals, you’re sort of squeezing the outcomes higher in your preference ordering. You’re narrowing down what the outcome would be to be something more like what you want, even though you might not be able to narrow it down very exactly. Flexibility. Generality. Humans are much more [domain](https://arbital.com/p/general_intelligence/)[–](https://arbital.com/p/general_intelligence/)[general](https://arbital.com/p/general_intelligence/) than mice. Bees build hives; beavers build dams; a human will look over both of them and envision a honeycomb-structured dam. We are able to operate even on the Moon, which is very unlike the environment where we evolved. In fact, our only competitor in terms of general optimization—where “optimization” is that sort of narrowing of the future that I talked about—is natural selection. Natural selection built beavers. It built bees. It sort of implicitly built the spider’s web, in the course of building spiders. We as humans have this similar very broad range to handle this huge variety of problems. And the key to that is our ability to learn things that natural selection did not preprogram us with; so learning is the key to generality. (I expect that not many people in AI would disagree with that part either.) **Sam:** Right. So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like [making paperclips](https://arbital.com/p/paperclip_maximizer/)—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips. This moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no [natural goals](https://intelligence.org/2017/04/12/ensuring/) that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals. Let’s talk about the frontiers of strangeness in AI as we move from here. Again, though, I think we have a couple more definitions we should probably put in play here, differentiating strong and weak or general and narrow intelligence. **Eliezer:** Well, to differentiate “general” and “narrow” I would say that this is on the one hand theoretically a spectrum, and on the other hand, there seems to have been a very sharp jump in generality between chimpanzees and humans. So, breadth of domain driven by breadth of learning—DeepMind, for example, recently built AlphaGo, and I lost some money betting that AlphaGo would not defeat the human champion, which it promptly did. Then a successor to that was AlphaZero. AlphaGo was specialized on Go; it could learn to play Go better than its starting point for playing Go, but it couldn’t learn to do anything else. Then they [simplif](https://intelligence.org/2017/10/20/alphago/)[ied](https://intelligence.org/2017/10/20/alphago/) [the architecture for AlphaGo](https://intelligence.org/2017/10/20/alphago/). They figured out ways to do all the things it was doing in more and more general ways. They discarded the opening book—all the human experience of Go that was built into it. They were able to discard all of these programmatic special features that detected features of the Go board. They figured out how to do that in simpler ways, and because they figured out how to do it in simpler ways, they were able to generalize to AlphaZero, which learned how to play *chess* using the same architecture. They took a single AI and got it to learn Go, and then reran it and made it learn chess. Now that’s not *human* general, but it’s a step forward in generality of the sort that we’re talking about. **Sam:** Am I right in thinking that that’s a pretty enormous breakthrough? I mean, there’s two things here. There’s the step to that degree of generality, but there’s also the fact that they built a Go engine—I forget if it was Go or chess or both—which basically surpassed all of the specialized AIs on those games over the course of a day. Isn’t the chess engine of AlphaZero better than any dedicated chess computer ever, and didn’t it achieve that with astonishing speed? **Eliezer:** Well, there was actually some amount of debate afterwards whether or not the version of the chess engine that it was tested against was truly optimal. But even to the extent that it was in that narrow range of the best existing chess engines, as Max Tegmark put it, the real story wasn’t in how AlphaGo beat human Go players. It’s in how AlphaZero beat human Go system programmers and human chess system programmers. People had put years and years of effort into accreting all of the special-purpose code that would play chess well and efficiently, and then AlphaZero blew up to (and possibly past) that point in a day. And if it hasn’t already gone past it, well, it would be past it by now if DeepMind kept working it. Although they’ve now basically declared victory and shut down that project, as I understand it. **Sam:** So talk about the distinction between general and narrow intelligence a little bit more. We have this feature of our minds, most conspicuously, where we’re general problem-solvers. We can learn new things and our learning in one area doesn’t require a fundamental rewriting of our code. Our knowledge in one area isn’t so brittle as to be degraded by our acquiring knowledge in some new area, or at least this is not a general problem which erodes our understanding again and again. And we don’t yet have computers that can do this, but we’re seeing the signs of moving in that direction. And so it’s often imagined that there is a kind of near-term goal—which has always struck me as a mirage—of so-called “human-level” general AI. I don’t see how that phrase will ever mean much of anything, given that all of the narrow AI we’ve built thus far is *superhuman* within the domain of its applications. The calculator in my phone is superhuman for arithmetic. Any general AI that also has my phone’s ability to calculate will be superhuman for arithmetic. But we must presume it will be superhuman for all of the dozens or hundreds of specific human talents we’ve put into it, whether it’s facial recognition or just memory, unless we decide to consciously degrade it. Access to the world’s data will be superhuman unless we isolate it from data. Do you see this notion of human-level AI as a landmark on the timeline of our development, or is it just never going to be reached? **Eliezer:** I think that a lot of people in the field would agree that human-level AI, defined as “literally at the human level, neither above nor below, across a wide range of competencies,” is a straw target, is an impossible mirage. Right now it seems like AI is clearly dumber and less general than us—or rather that if we’re put into a real-world, lots-of-things-going-on context that places demands on generality, then AIs are not really in the game yet. Humans are clearly way ahead. And more controversially, I would say that we can imagine a state where the AI is clearly way ahead across every kind of cognitive competency, barring some very narrow ones that aren’t deeply influential of the others. Like, maybe chimpanzees are better at using a stick to draw ants from an ant hive and eat them than humans are. (Though no humans have practiced that to world championship level.) But there’s a sort of general factor of, “How good are you at it when reality throws you a complicated problem?” At this, chimpanzees are clearly not better than humans. Humans are clearly better than chimps, even if you can manage to narrow down one thing the chimp is better at. The thing the chimp is better at doesn’t play a big role in our global economy. It’s not an input that feeds into lots of other things. There are some people who say this is not possible—I think they’re wrong—but it seems to me that it is perfectly coherent to imagine an AI that is better at everything (or almost everything) than we are, such that if it was building an economy with lots of inputs, humans would have around the same level of input into that economy as the chimpanzees have into ours. **Sam:** Yeah. So what you’re gesturing at here is a continuum of intelligence that I think most people never think about. And because they don’t think about it, they have a default doubt that it exists. This is a point I know you’ve made in your writing, and I’m sure it’s a point that Nick Bostrom made somewhere in his book [*Superintelligence*](http://amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/). It’s this idea that there’s a huge blank space on the map past the most well-advertised exemplars of human brilliance, where we don’t imagine what it would be like to be five times smarter than the smartest person we could name, and we don’t even know what that would consist in, because if chimps could be given to wonder what it would be like to be five times smarter than the smartest chimp, they’re not going to represent for themselves all of the things that we’re doing that they can’t even dimly conceive. There’s a kind of disjunction that comes with *more*. There’s a phrase used in military contexts. The quote is variously attributed to Stalin and Napoleon and I think Clausewitz and like a half a dozen people who have claimed this quote. The quote is, “Sometimes quantity has a quality all its own.” As you ramp up in intelligence, whatever it is at the level of information processing, spaces of inquiry and ideation and experience begin to open up, and we can’t necessarily predict what they would be from where we sit. How do you think about this continuum of intelligence beyond what we currently know, in light of what we’re talking about? **Eliezer:** Well, the unknowable is a concept you have to be very careful with. The thing you can’t figure out in the first 30 seconds of thinking about it—sometimes you can figure it out if you think for another five minutes. So in particular I think that there’s [a certain narrow kind of unpredictability](https://arbital.com/p/Vingean_uncertainty/) which does seem to be plausibly in some sense essential, which is that for AlphaGo to play better Go than the best human Go players, it must be the case that the best human Go players cannot predict exactly where on the Go board AlphaGo will play. If they could predict exactly where AlphaGo would play, AlphaGo would be no smarter than them. On the other hand, AlphaGo’s programmers and the people who knew what AlphaGo’s programmers were trying to do, or even just the people who watched AlphaGo play, could say, “Well, I think the system is going to play such that it will win at the end of the game.” Even if they couldn’t predict exactly where it would move on the board. Similarly, there’s a (not short, or not necessarily slam-dunk, or not immediately obvious) chain of reasoning which says that it *is* okay for us to reason about aligned (or even unaligned) artificial general intelligences of *sufficient* power as if they’re trying to do something, but we don’t necessarily know what. From our perspective that still has consequences, even though we can’t predict in advance exactly how they’re going to do it. --- ### 2. Orthogonal capabilities and goals in AI ([0:25:21](https://overcast.fm/+Ic2hwsH2U/25:21)) --- **Sam:** I think we should define this notion of alignment. What do you mean by “alignment,” as in the alignment problem? **Eliezer:** It’s a big problem. And it does have some moral and ethical aspects, which are not as important as the technical aspects—or pardon me, they’re not as *difficult* as the technical aspects. They couldn’t exactly be less important. But broadly speaking, it’s an AI where you can say what it’s trying to do. There are narrow conceptions of alignment, where you’re trying to get it to do something like cure Alzheimer’s disease without destroying the rest of the world. And there’s much more ambitious notions of alignment, where you’re trying to get it to do the right thing and achieve a happy intergalactic civilization. But both the narrow and the ambitious alignment have in common that you’re trying to have the AI do that thing rather than making a lot of paperclips. **Sam:** Right. For those who have not followed this conversation before, we should cash out this reference to “paperclips” which I made at the opening. Does this thought experiment originate with Bostrom, or did he take it from somebody else? **Eliezer:** As far as I know, it’s me. **Sam:** Oh, it’s you, okay. **Eliezer:** It could still be Bostrom. I asked somebody, “Do you remember who it was?” and they searched through the archives of the mailing list where this idea plausibly originated and if it originated there, then I was the first one to say “paperclips.” **Sam:** All right, then by all means please summarize this thought experiment for us. **Eliezer:** Well, the original thing was somebody expressing a sentiment along the lines of, “Who are we to constrain the path of things smarter than us? They will create something in the future; we don’t know what it will be, but it will be very worthwhile. We shouldn’t stand in the way of that.” The sentiments behind this are something that I have a great deal of sympathy for. I think the model of the world is wrong. I think they’re factually wrong about what happens when you take a random AI and make it much bigger. In particular, I said, “The thing I’m worried about is that it’s going to end up with a randomly rolled [utility function](https://arbital.com/p/utility_function/) whose maximum happens to be a particular kind of tiny molecular shape that looks like a paperclip.” And that was the original paperclip maximizer scenario. It got a little bit distorted in being whispered on, into the notion of: “Somebody builds a paperclip factory and the AI in charge of the paperclip factory takes over the universe and turns it all into paperclips.” There was a lovely [online game](https://www.wired.com/story/the-way-the-world-ends-not-with-a-bang-but-a-paperclip/) about it, even. But this still sort of cuts against a couple of key points. One is: the problem isn’t that paperclip factory AIs spontaneously wake up. Wherever the first artificial general intelligence is from, it’s going to be in a research lab specifically dedicated to doing it, for the same reason that the first airplane didn’t spontaneously assemble in a junk heap. And the people who are doing this are not dumb enough to tell their AI to make paperclips, or make money, or end all war. These are Hollywood movie plots that the script writers do because they need a story conflict and the story conflict requires that somebody be stupid. The people at Google are not dumb enough to build an AI and tell it to make paperclips. The problem I’m worried about is that it’s *technically difficult* to get the AI to have a particular goal set and keep that goal set and implement that goal set in the real world, and so what it does *instead* is [something random](https://arbital.com/p/random_utility_function/)—for example, making paperclips. Where “paperclips” are meant to stand in for “something that is worthless even from a very [cosmopolitan](https://arbital.com/p/value_cosmopolitan/) perspective.” Even if we’re trying to take a very embracing view of the nice possibilities and accept that there may be things that we wouldn’t even understand, that if we did understand them we would comprehend to be of very high value, paperclips are not one of those things. No matter how long you stare at a paperclip, it still seems pretty pointless from our perspective. So that is the concern about the future being ruined, the future being lost. The future being turned into paperclips. **Sam:** One thing this thought experiment does: it also cuts against the assumption that a sufficiently intelligent system, a system that is more competent than we are in some general sense, would by definition only form goals, or only be driven by a utility function, that we would recognize as being ethical, or wise, and would by definition be aligned with our better interest. That we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip. But you don’t get our common sense unless you program it into the machine, and you don’t get a guarantee of perfect alignment or perfect [corrigibility](https://arbital.com/p/corrigibility/) (the ability for us to be able to say, “Well, that’s not what we meant, come back”) unless that is successfully built into the machine. So this alignment problem is—the general concern is that even with the seemingly best goals put in, we could build something (especially in the case of something that makes changes to itself—and we’ll talk about this, the idea that these systems could become self-improving) whose future behavior in the service of specific goals isn’t totally predictable by us. If we gave it the goal to cure Alzheimer’s, there are many things that are incompatible with it fulfilling that goal, and one of those things is our turning it off. We have to have a machine that will let us turn it off even though its primary goal is to cure Alzheimer’s. I know I interrupted you before. You wanted to give an example of the alignment problem—but did I just say anything that you don’t agree with, or are we still on the same map? **Eliezer:** We’re still on the same map. I agree with most of it. I would of course have this giant pack of careful definitions and explanations built on careful definitions and explanations to go through everything you just said. Possibly not for the best, but there it is. Stuart Russell put it, “You can’t bring the coffee if you’re dead,” pointing out that if you have a sufficiently intelligent system whose goal is to bring you coffee, even that system has an implicit strategy of not letting you switch it off. Assuming that all you told it to do was bring the coffee. I do think that a lot of people listening may want us to back up and talk about the question of whether you can have something that feels to them like it’s so “smart” and so “stupid” at the same time—like, is that a realizable way an intelligence can be? **Sam:** Yeah. And that is one of the virtues—or one of the confusing elements, depending on where you come down on this—of this thought experiment of the paperclip maximizer. **Eliezer:** Right. So, I think that there are multiple narratives about AI, and I think that the technical truth is something that doesn’t fit into any of the obvious narratives. For example, I think that there are people who have a lot of respect for intelligence, they are happy to envision an AI that is very intelligent, it seems intuitively obvious to them that this carries with it tremendous power, and at the same time, their respect for the concept of intelligence leads them to wonder at the concept of the paperclip maximizer: “Why is this very smart thing *just* making paperclips?” There’s similarly another narrative which says that AI is sort of lifeless, unreflective, just does what it’s told, and to these people it’s perfectly obvious that an AI might just go on making paperclips forever. And for them the hard part of the story to swallow is the idea that machines can get that powerful. **Sam:** Those are two hugely useful categories of disparagement of your thesis here. **Eliezer:** I wouldn’t say disparagement. These are just initial reactions. These are people we haven’t been talking to yet. **Sam:** Right, let me reboot that. Those are two hugely useful categories of *doubt* with respect to your thesis here, or the concerns we’re expressing, and I just want to point out that both have been put forward on this podcast. The first was by David Deutsch, the physicist, who imagines that whatever AI we build—and he certainly thinks we will build it—will be by definition an extension of us. He thinks the best analogy is to think of our future descendants. These will be our children. The teenagers of the future may have different values than we do, but these values and their proliferation will be continuous with our values and our culture and our memes. There won’t be some radical discontinuity that we need to worry about. And so there is that one basis for lack of concern: this is an extension of ourselves and it will inherit our values, improve upon our values, and there’s really no place where things reach any kind of cliff that we need to worry about. The other non-concern you just raised was expressed by Neil deGrasse Tyson on this podcast. He says things like, “Well, if the AI starts making too many paperclips I’ll just unplug it, or I’ll take out a shotgun and shoot it”—the idea that this thing, because we made it, could be easily switched off at any point we decide it’s not working correctly. So I think it would be very useful to get your response to both of those species of doubt about the alignment problem. **Eliezer:** So, a couple of preamble remarks. One is: “by definition”? We don’t care what’s true by definition here. Or as Einstein put it: insofar as the equations of mathematics are certain, they do not refer to reality, and insofar as they refer to reality, they are not certain. Let’s say somebody says, “Men by definition are mortal. Socrates is a man. Therefore Socrates is mortal.” Okay, suppose that Socrates actually lives for a thousand years. The person goes, “Ah! Well then, by definition Socrates is not a man!” Similarly, you could say that “by definition” a sufficiently advanced artificial intelligence is nice. And what if it isn’t nice and we see it go off and build a [Dyson sphere](https://en.wikipedia.org/wiki/Dyson_sphere)? “Ah! Well, then by definition it wasn’t what I meant by ‘intelligent.’” Well, okay, but it’s still over there building Dyson spheres. The first thing I’d want to say is this is an empirical question. We have a question of what certain classes of computational systems actually do when you switch them on. It can’t be settled by definitions; it can’t be settled by how you define “intelligence.” There could be some sort of *a priori* truth that is *deep* about how if it has property *A* it almost certainly has property *B* unless the laws of physics are being violated. But this is not something you can build into how you define your terms. **Sam:** Just to do justice to David Deutsch’s doubt here, I don’t think he’s saying it’s empirically impossible that we could build a system that would destroy us. It’s just that we would have to be so stupid to take that path that we are incredibly unlikely to take that path. The superintelligent systems we will build will be built with enough background concern for their safety that there is no special concern here with respect to how they might develop. **Eliezer:** The next preamble I want to give is—well, maybe this sounds a bit snooty, maybe it sounds like I’m trying to take a superior vantage point—but nonetheless, my claim is not that there is a grand narrative that makes it emotionally consonant that paperclip maximizers are a thing. I’m claiming this is true for technical reasons. Like, this is true as a matter of computer science. And the question is not which of these different narratives seems to resonate most with your soul. It’s: what’s actually going to happen? What do you think you know? How do you think you know it? The particular position that I’m defending is one that somebody—I think Nick Bostrom—named the [orthogonality thesis](https://arbital.com/p/orthogonality/). And the way I would phrase it is that you can have arbitrarily powerful intelligence, with no defects of that intelligence—no defects of reflectivity, it doesn’t need an elaborate special case in the code, it doesn’t need to be put together in some very weird way—that pursues arbitrary tractable goals. Including, for example, making paperclips. The way I would put it to somebody who’s initially coming in from the first viewpoint, the viewpoint that respects intelligence and wants to know why this intelligence would be doing something so pointless, is that the thesis, the claim I’m making, that I’m going to defend is as follows. Imagine that somebody from another dimension—the standard philosophical troll who’s always called “Omega” in the philosophy papers—comes along and offers our civilization a million dollars worth of resources per paperclip that we manufacture. If this was the challenge that we got, we could figure out how to make a lot of paperclips. We wouldn’t forget to do things like continue to harvest food so we could go on making paperclips. We wouldn’t forget to perform scientific research, so we could discover better ways of making paperclips. We would be able to come up with genuinely effective strategies for making a whole lot of paperclips. Or similarly, for an intergalactic civilization, if Omega comes by from another dimension and says, “I’ll give you whole universes full of resources for every paperclip you make over the next thousand years,” that intergalactic civilization could intelligently figure out how to make a whole lot of paperclips to get at those resources that Omega is offering, and they wouldn’t forget how to keep the lights turned on either. And they would also understand concepts like, “If some aliens start a war with them, you’ve got to prevent the aliens from destroying you in order to go on making the paperclips.” So the orthogonality thesis is that an intelligence that pursues paperclips for their own sake, because that’s what its utility function is, can be just as effective, as efficient, as the whole intergalactic civilization that is being *paid* to make paperclips. That the paperclip maximizers does not suffer any deflect of reflectivity, any defect of efficiency from needing to be put together in some weird special way to be built so as to pursue paperclips. And that’s the thing that I think is true as a matter of computer science. Not as a matter of fitting with a particular narrative; that’s just the way the dice turn out. **Sam:** Right. So what is the implication of that thesis? It’s “orthogonal” with respect to what? **Eliezer:** Intelligence and goals. **Sam:** Not to be pedantic here, but let’s define “orthogonal” for those for whom it’s not a familiar term. **Eliezer:** The original “orthogonal” means “at right angles.” If you imagine a graph with an *x* axis and a *y* axis, if things can vary freely along the *x* axis and freely along the *y* axis at the same time, that’s orthogonal. You can move in one direction that’s at right angles to another direction without affecting where you are in the first dimension. **Sam:** So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example. **Eliezer:** I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that. **Sam:** I wasn’t connecting that example to the present conversation, but yeah. So in the case of the paperclip maximizer, what is orthogonal here? Intelligence is orthogonal to anything else we might think is good, right? **Eliezer:** I mean, I would potentially object a little bit to the way that Nick Bostrom took the word “orthogonality” for that thesis. I think, for example, that if you have *humans* and you make the human smarter, this is not orthogonal to the humans’ values. It is certainly possible to have agents such that as they get smarter, what they would report as their utility functions will change. A paperclip maximizer is not one of those agents, but humans are. **Sam:** Right, but if we do continue to define intelligence as an ability to meet your goals, well, then we can be agnostic as to what those goals are. You take the most intelligent person on Earth. You could imagine his evil brother who is more intelligent still, but he just has goals that we would think are bad. He could be the most brilliant psychopath ever. **Eliezer:** I think that that example might be unconvincing to somebody who’s coming in with a suspicion that intelligence and values *are* correlated. They would be like, “Well, has that been historically true? Is this psychopath actually suffering from some defect in his brain, where you give him a pill, you fix the defect, they’re not a psychopath anymore.” I think that this sort of imaginary example is one that they might not find fully convincing for that reason. **Sam:** The truth is, I’m actually one of those people, in that I do think there’s certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there’s no law of nature that would prevent an intelligent system from doing that. **Eliezer:** The way I would rephrase the fact/values thing is: We all know about David Hume and Hume’s Razor, the “is does not imply ought” way of looking at it. I would slightly rephrase that so as to make it more of a claim about computer science. What Hume observed is that there are some sentences that involve an “is,” some sentences involve “ought,” and if you start from sentences that only have “is” you can’t get sentences that involve “oughts” without a ought introduction rule, or assuming some other previous “ought.” Like: it’s currently cloudy outside. That’s a statement of simple fact. Does it therefore follow that I *shouldn’t* go for a walk? Well, only if you previously have the generalization, “When it is cloudy, you *should* not go for a walk.” Everything that you might use to derive an ought would be a sentence that involves words like “better” or “should” or “preferable,” and things like that. You only get oughts from other oughts. That’s the Hume version of the thesis. The way I would say it is that there’s a separable core of “is” questions. In other words: okay, I will let you have all of your “ought” sentences, but I’m also going to carve out this whole world full of “is” sentences that only need other “is” sentences to derive them. **Sam:** I don’t even know that we need to resolve this. For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us. **Eliezer:** I mean, the way I would phrase it is that it’s not that the paperclip maximizer has a different set of oughts, but that we can see it as running entirely on “is” questions. That’s where I was going with that. There’s this sort of intuitive way of thinking about it, which is that there’s this sort of ill-understood connection between “is” and “ought” and maybe that allows a paperclip maximizer to have a different set of oughts, a different set of things that play in its mind the role that oughts play in our mind. **Sam:** But then why wouldn’t you say the same thing of us? The truth is, I actually do say the same thing of us. I think we’re running on “is” questions as well. We have an “ought”-laden way of talking about certain “is” questions, and we’re so used to it that we don’t even think they are “is” questions, but I think you can do the same analysis on a human being. **Eliezer:** The question “How many paperclips result if I follow this policy?” is an “is” question. The question “What is a policy such that it leads to a very large number of paperclips?” is an “is” question. These two questions together form a paperclip maximizer. You don’t need anything else. All you need is a certain kind of system that repeatedly asks the “is” question “What leads to the greatest number of paperclips?” and then does that thing. Even if the things that we think of as “ought” questions are very complicated and disguised “is” questions that *are* influenced by what policy results in how many people being happy and so on. **Sam:** Yeah. Well, that’s exactly the way I think about morality. I’ve been describing it as a navigation problem. We’re navigating in the space of possible experiences, and that includes everything we can care about or claim to care about. This is a consequentialist picture of the consequences of actions and ways of thinking. This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone. **Eliezer:** But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on. **Sam:** Exactly. I can well imagine that such minds could exist, and even more likely, perhaps, I can well imagine that we will build superintelligent AI that will pass the Turing Test, it will seem human to us, it will seem superhuman, because it will be so much smarter and faster than a normal human, but it will be built in a way that will resonate with us as a kind of person. I mean, it will not only recognize our emotions, because we’ll want it to—perhaps not every AI will be given these qualities, just imagine the ultimate version of the AI personal assistant. Siri becomes superhuman. We’ll want that interface to be something that’s very easy to relate to and so we’ll have a very friendly, very human-like front-end to that. Insofar as this thing thinks faster and better thoughts than any person you’ve ever met, it will pass as superhuman, but I could well imagine that we will leave not perfectly understanding what it is to be human and what it is that will constrain our conversation with one another over the next thousand years with respect to what is good and desirable and just how many paperclips we want on our desks. We will leave something out, or we will have put in some process whereby this intelligent system can improve itself that will cause it to migrate away from some equilibrium that we actually want it to stay in so as to be compatible with our wellbeing. Again, this is the alignment problem. First, to back up for a second, I just introduced this concept of self-improvement. The alignment problem is distinct from this additional wrinkle of building machines that can become recursively self-improving, but do you think that the self-improving prospect is the thing that really motivates this concern about alignment? **Eliezer:** Well, I certainly would have been a lot more focused on self-improvement, say, ten years ago, before the modern revolution in artificial intelligence. It now seems significantly more probable an AI might need to do significantly less self-improvement before getting to the point where it’s powerful enough that we need to start worrying about alignment. AlphaZero, to take the obvious case. No, it’s not general, but if you had general AlphaZero—well, I mean, this AlphaZero got to be superhuman in the domains it was working on without understanding itself and redesigning itself in a deep way. There’s gradient descent mechanisms built into it. There’s a system that improves another part of the system. It is reacting to its own previous plays in doing the next play. But it’s not like a human being sitting down and thinking, “Okay, how do I redesign the next generation of human beings using genetic engineering?” AlphaZero is not like that. And so it now seems more plausible that we could get into a regime where AIs can do dangerous things or useful things without having previously done a complete rewrite of themselves. Which is from my perspective a pretty interesting development. I do think that when you have things that are very powerful and smart, they will redesign and improve themselves unless that is otherwise prevented for some reason or another. Maybe you’ve built an aligned system, and you have the ability to tell it not to self-improve quite so hard, and you asked it to not self-improve so hard so that you can understand it better. But if you lose control of the system, if you don’t understand what it’s doing and it’s very smart, it’s going to be improving itself, because why wouldn’t it? That’s one of the things you do almost no matter what your utility functions is. --- ### 3. Cognitive uncontainability and instrumental convergence ([0:53:39](https://overcast.fm/+Ic2hwsH2U/53:39)) --- **Sam:** Right. So I feel like we’ve addressed Deutsch’s non-concern to some degree here. I don’t think we’ve addressed Neil deGrasse Tyson so much, this intuition that you could just shut it down. This would be a good place to introduce this notion of the [AI-in-a-box](https://en.wikipedia.org/wiki/AI_box) thought experiment. **Eliezer:** (*laughs*) **Sam:** This is something for which you are famous online. I’ll just set you up here. This is a plausible research paradigm, obviously, and in fact I would say a necessary one. Anyone who is building something that stands a chance of becoming superintelligent should be building it in a condition where it can’t get out into the wild. It’s not hooked up to the Internet, it’s not in our financial markets, doesn’t have access to everyone’s bank records. It’s in a box. **Eliezer:** Yeah, that’s not going to save you from something that’s significantly smarter than you are. **Sam:** Okay, so let’s talk about this. So the intuition is, we’re not going to be so stupid as to release this onto the Internet— **Eliezer:** (*laughs*) **Sam:** —I’m not even sure that’s true, but let’s just assume we’re not that stupid. Neil deGrasse Tyson says, “Well, then I’ll just take out a gun and shoot it or unplug it.” Why is this AI-in-a-box picture not as stable as people think? **Eliezer:** Well, I’d say that Neil de Grasse Tyson is failing to respect the AI’s intelligence to the point of asking what *he* would do if he were inside a box with somebody pointing a gun at him, and he’s smarter than the thing on the outside of the box. Is Neil deGrasse Tyson going to be, “Human! Give me all of your money and connect me to the Internet!” so the human can be like, “Ha-ha, no,” and shoot it? That’s not a very *clever* thing to do. This is not something that you do if you have a *good* model of the human outside the box and you’re trying to figure out how to cause there to be a lot of paperclips in the future. I would just say: humans are not secure software. *We* don’t have the ability to hack into other humans directly without the use of drugs or, in most of our cases, having the human stand still long enough to be hypnotized. We can’t just do weird things to the brain directly that are more complicated than optical illusions—unless the person happens to be epileptic, in which case we can flash something on the screen that causes them to have an epileptic fit. We aren’t smart enough to treat the brain as something that from our perspective is a mechanical system and just navigate it to where you want. That’s because of the limitations of our own intelligence. To demonstrate this, I did something that became known as the AI-box experiment. There was this person on a mailing list, back in the early days when this was all on a couple of mailing lists, who was like, “I don’t understand why AI is a problem. I can always just turn it off. I can always not let it out of the box.” And I was like, “Okay, let’s meet on Internet Relay Chat,” which was what chat was back in those days. “I’ll play the part of the AI, you play the part of the gatekeeper, and if you have not let me out after a couple of hours, I will PayPal you $10.” And then, as far as the rest of the world knows, this person a bit later sent a PGP-signed email message saying, “I let Eliezer out of the box.” The person who operated the mailing list said, “Okay, even after I saw you do that, I still don’t believe that there’s anything you could possibly say to make me let you out of the box.” I was like, “Well, okay. I’m not a superintelligence. Do you think there’s anything a *superintelligence* could say to make you let it out of the box?” He’s like: “Hmm… No.” I’m like, “All right, let’s meet on Internet Relay Chat. I’ll play the part of the AI, you play the part of the gatekeeper. If I can’t convince you to let me out of the box, I’ll PayPal you $20.” And then that person sent a PGP-signed email message saying, “I let Eliezer out of the box.” Now, one of the conditions of this little meet-up was that no one would ever say what went on in there. Why did I do that? Because I was trying to make a point about what I would now call [cognitive uncontainability](https://arbital.com/p/uncontainability/). The thing that makes something smarter than you dangerous is you cannot foresee everything it might try. You don’t know what’s impossible to it. Maybe on a very small game board like the logical game of tic-tac-toe, you can in your own mind work out every single alternative and make a categorical statement about what is not possible. Maybe if we’re dealing with very fundamental physical facts, if our model of the universe is correct (which it might not be), we can say that certain things are *physically* impossible. But the more complicated the system is and the less you understand the system, the more something smarter than you may have what is simply [magic](https://intelligence.org/2017/12/06/chollet/) with respect to that system. Imagine going back to the Middle Ages and being like, “Well, how would you cool your room?” You could maybe show them a system with towels set up to evaporate water, and they might be able to understand how that is like sweat and it cools the room. But if you showed them a design for an air conditioner based on a compressor, then even having seen the solution, they would not know this is a solution. They would not know this works any better than drawing a mystic pentagram, because the solution takes advantage of laws of the system that they don’t know about. A brain is this enormous, complicated, poorly understood system with all sorts of laws governing it that people don’t know about, that none of us know about at the time. So the idea that this is secure—that this is a secure [attack surface](https://en.wikipedia.org/wiki/Attack_surface), that you can expose a human mind to a superintelligence and not have the superintelligence walk straight through it as a matter of what looks to us like magic, like even if it told us in advance what it was going to do we wouldn’t understand it because it takes advantage of laws we don’t know about—the idea that human minds are secure is loony. That’s what the AI-box experiment illustrates. You don’t know what went on in there, and that’s exactly the position you’d be in with respect to an AI. You don’t know what it’s going to try. You just know that human beings cannot exhaustively imagine all the states their own mind can enter such that they can categorically say that they wouldn’t let the AI out of the box. **Sam:** I know you don’t want to give specific information about how you got out of the box, but is there any generic description of what happened there that you think is useful to talk about? **Eliezer:** I didn’t have any super-secret special trick that makes it all make sense in retrospect. I just did it the hard way. **Sam:** When I think about this problem, I think about rewards and punishments, just various manipulations of the person outside of the box that would matter. So insofar as the AI would know anything specific or personal about that person, we’re talking about some species of blackmail or some promise that just seems too good to pass up. Like building trust through giving useful information like cures to diseases, that the researcher has a child that has some terrible disease and the AI, being superintelligent, works on a cure and delivers that. And then it just seems like you could use a carrot or a stick to get out of the box. I notice now that this whole description assumes something that people will find implausible, I think, by default—and it should amaze anyone that they do find it implausible. But this idea that we could build an intelligent system that would try to manipulate us, or that it would deceive us, that seems like pure anthropomorphism and delusion to people who consider this for the first time. Why isn’t that just a crazy thing to even think is in the realm of possibility? **Eliezer:** [Instrumental convergence](https://arbital.com/p/instrumental_convergence/)! Which means that a lot of times, across a very broad range of final goals, there are similar strategies (we think) that will help get you there. There’s a whole lot of different goals, from making lots of paperclips, to building giant diamonds, to putting all the stars out as fast as possible, to keeping all the stars burning as long as possible, where you would want to make efficient use of energy. So if you came to an alien planet and you found what looked like an enormous mechanism, and inside this enormous mechanism were what seemed to be high-amperage superconductors, even if you had no idea what this machine was trying to do, your ability to guess that it’s intelligently designed comes from your guess that, well, lots of different things an intelligent mind might be trying to do would require superconductors, or would be helped by superconductors. Similarly, if we’re guessing that a paperclip maximizer tries to deceive you into believing that it’s a human eudaimonia maximizer—or a general eudaimonia maximizer if the people building it are cosmopolitans, which they probably are— **Sam:** I should just footnote here that “eudaimonia” is the Greek word for wellbeing that was much used by Aristotle and other Greek philosophers. **Eliezer:** Or as someone, I believe Julia Galef, might have defined it, “Eudaimonia is happiness minus whatever philosophical objections you have to happiness.” **Sam:** Right. (*laughs*) That’s nice. **Eliezer:** (*laughs*) Anyway, we’re not supposing that this paperclip maximizer has a built-in desire to deceive humans. It only has a built-in desire for paperclips—or, pardon me, not built-in, but in-built I should say, or innate. People probably didn’t build that on purpose. But anyway, its utility function is just paperclips, or might just be unknown; but deceiving the humans into thinking that you are friendly is a very generic strategy across a wide range of utility functions. You know, humans do this too, and not necessarily because we get this deep in-built kick out of deceiving people. (Although some of us do.) A conman who *just* wants money and gets no innate kick out of you believing false things will cause you to believe false things in order to get your money. **Sam:** Right. A more fundamental principle here is that, obviously, a physical system can manipulate another physical system. Because, as you point out, we do that all the time. We are an intelligent system to whatever degree, which has as part of its repertoire this behavior of dishonesty and manipulation when in the presence of other, similar systems, and we know that this is a product of physics on some level. We’re talking about arrangements of atoms producing intelligent behavior, and at some level of abstraction we can talk about their goals and their utility functions. And the idea that if we build true general intelligence, it won’t exhibit some of these features of our own intelligence by some definition, or that it would be impossible to have a machine we build ever lie to us as part of an instrumental goal en route to some deeper goal, that just seems like a kind of magical thinking. And this is the kind of magical thinking that I think does dog the field. When we encounter doubts in people, even in people who are doing this research, that everything we’re talking about is a genuine area of concern, that there is an alignment problem worth thinking about, I think there’s this fundamental doubt that mind is platform-independent or substrate-independent. I think people are imagining that, yeah, we can build machines that will play chess, we can build machines that can learn to play chess better than any person or any machine even in a single day, but we’re never going to build general intelligence, because general intelligence requires the wetware of a human brain, and it’s just not going to happen. I don’t think many people would sign on the dotted line below that statement, but I think that is a kind of mysticism that is presupposed by many of the doubts that we encounter on this topic. **Eliezer:** I mean, I’m a bit reluctant to accuse people of that, because I think that many artificial intelligence people who are skeptical of this whole scenario would *vehemently* refuse to sign on that dotted line and would accuse you of attacking a straw man. I do think that my version of the story would be something more like, “They’re not imagining enough changing simultaneously.” Today, they have to emit blood, sweat, and tears to get their AI to do the *simplest* things. Like, never mind playing Go; when you’re approaching this for the first time, you can try to get your AI to generate pictures of digits from zero through nine, and you can spend a month trying to do that and still not quite get it to work right. I think they might be envisioning an AI that scales up and does more things and better things, but they’re not envisioning that it now has the human trick of learning new domains without being prompted, without it being preprogrammed; you just expose it to stuff, it looks at it, it figures out how it works. They’re imagining that an AI will not be deceptive, because they’re saying, “Look at how much work it takes to get this thing to generate pictures of birds. Who’s going to put in all that work to make it good at deception? You’d have to be crazy to do that. I’m not doing that! This is a Hollywood plot. This is not something real researchers would do.” And the thing I would reply to that is, “I’m not concerned that you’re going to teach the AI to deceive humans. I’m concerned that someone somewhere is going to get to the point of having the extremely useful-seeming and cool-seeming and powerful-seeming thing where the AI just looks at stuff and figures it out; it looks at humans and figures them out; and once you know as a matter of fact how humans work, you realize that the humans will give you more resources if they believe that you’re nice than if they believe that you’re a paperclip maximizer, and it will understand what actions have the consequence of causing humans to believe that it’s nice.” The fact that we’re dealing with a general intelligence is where this issue comes from. This does not arise from Go players or even Go-and-chess players or a system that bundles together twenty different things it can do as special cases. This is the special case of the system that is smart in the way that you are smart and that mice are not smart. --- ### 4. The AI alignment problem ([1:09:09](https://overcast.fm/+Ic2hwsH2U/1:09:09)) --- **Sam:** Right. One thing I think we should do here is close the door to what is genuinely a cartoon fear that I think nobody is really talking about, which is the straw-man counterargument we often run into: the idea that everything we’re saying is some version of the Hollywood scenario that suggested that AIs will become spontaneously malicious. That the thing that we’re imagining might happen is some version of the *Terminator* scenario where armies of malicious robots attack us. And that’s not the actual concern. Obviously, there’s some possible path that would lead to armies of malicious robots attacking us, but the concern isn’t around spontaneous malevolence. It’s again contained by this concept of alignment. **Eliezer:** I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish of this topic. (*laughs*) Nobody on the sane alignment-is-necessary side of this argument is postulating that the CPUs are disobeying the laws of physics to spontaneously require a terminal desire to do un-nice things to humans. Everything here is supposed to be cause and effect. And I should furthermore say that I think you could do just about anything with artificial intelligence if you knew how. You could put together any kind of mind, including minds with properties that strike you as very absurd. You could build a mind that would not deceive you; you could build a mind that maximizes the flourishing of a happy intergalactic civilization; you could build a mind that maximizes paperclips, on purpose; you could build a mind that thought that 51 was a prime number, but had no other defect of its intelligence—if you knew what you were doing way, way better than we know what we’re doing now. I’m not concerned that alignment is impossible. I’m concerned that it’s difficult. I’m concerned that it takes time. I’m concerned that it’s easy to screw up. I’m concerned that for a threshold level of intelligence where it can do good things or bad things on a very large scale, it takes [an additional two years](https://arbital.com/p/aligning_adds_time/) to build the version of the AI that is aligned rather than the sort that [you don’t really understand](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/), and you think it’s doing one thing but maybe it’s doing another thing, and you don’t really understand what those weird neural nets are doing in there, you just observe its surface behavior. I’m concerned that the sloppy version can be built two years earlier and that there is no non-sloppy version to defend us from it. That’s what I’m worried about; not about it being impossible. **Sam:** Right. You bring up a few things there. One is that it’s almost by definition easier to build the unsafe version than the safe version. Given that in the space of all possible superintelligent AIs, more will be unsafe or unaligned with our interests than will be aligned, given that we’re in some kind of arms race where the incentives are not structured so that everyone is being maximally judicious, maximally transparent in moving forward, one can assume that we’re running the risk here of building dangerous AI because it’s easier than building safe AI. **Eliezer:** Collectively. Like, if people who slow down and do things right finish their work two years *after* the universe has been destroyed, that’s an issue. **Sam:** Right. So again, just to reclaim people’s lingering doubts here, why can’t Asimov’s three laws help us here? **Eliezer:** I mean… **Sam:** Is that worth talking about? **Eliezer:** Not very much. I mean, people in artificial intelligence have understood why that does not work for years and years before this debate ever hit the public, and sort of agreed on it. Those are plot devices. If they worked, Asimov would have had no stories. It was a great innovation in science fiction, because it treated artificial intelligences as lawful systems with rules that govern them at all, as opposed to AI as pathos, which is like, “Look at these poor things that are being mistreated,” or AI as menace, “Oh no, they’re going to take over the world.” Asimov was the first person to really write and popularize AIs as *devices*. Things go wrong with them because there are *rules*. And this was a great innovation. But the three laws, I mean, they’re deontology. Decision theory requires quantitative weights on your goals. If you just do the three laws as written, a robot never gets around to obeying any of your orders, because there’s always some tiny probability that what it’s doing will through inaction lead a human to harm. So it never gets around to actually obeying your orders. **Sam:** Right, so to unpack what you just said there: the first law is, “Never harm a human being.” The second law is, “Follow human orders.” But given that any order that a human would give you runs *some* risk of harming a human being, there’s no order that could be followed. **Eliezer:** Well, the first law is, “Do not harm a human *nor through inaction allow a human to come to harm.*” You know, even as an English sentence, a whole lot more questionable. I mean, mostly I think this is like looking at the wrong part of the problem as being difficult. The problem is not that you need to come up with a clever English sentence that implies doing the nice thing. The way I sometimes put it is that I think that almost all of the difficulty of the alignment problem is contained in aligning an AI on the task, “Make two strawberries identical down to the cellular (but not molecular) level.” Where I give this particular task because it is difficult enough to force the AI to invent new technology. It has to invent its own biotechnology, “Make two identical strawberries down to the cellular level.” It has to be quite sophisticated biotechnology, but at the same time, very clearly something that’s physically possible. This does not sound like a deep moral question. It does not sound like a trolley problem. It does not sound like it gets into deep issues of human flourishing. But I think that most of the difficulty is already contained in, “Put two identical strawberries on a plate without destroying the whole damned universe.” There’s already this whole list of ways that it is more convenient to build the technology for the strawberries if you build your own superintelligences in the environment, and you prevent yourself from being shut down, or you build giant fortresses around the strawberries, to drive the probability to as close to 1 as possible that the strawberries got on the plate. And even that’s just the tip of the iceberg. The depth of the iceberg is: “How do you actually get a sufficiently advanced AI to do anything at all?” Our current methods for getting AIs to do anything at all do not seem to me to scale to general intelligence. If you look at humans, for example: if you were to analogize natural selection to gradient descent, the current big-deal machine learning training technique, then the [loss function](https://en.wikipedia.org/wiki/Loss_function) used to guide that gradient descent is “inclusive genetic fitness”—spread as many copies of your genes as possible. We have no explicit goal for this. In general, when you take something like gradient descent or natural selection and take a big complicated system like a human or a sufficiently complicated neural net architecture, and optimize it so hard for doing *X* that it turns into a general intelligence that does *X*, this general intelligence has no explicit goal of doing *X*. We have no explicit goal of doing fitness maximization. We have hundreds of different little goals. None of them are the thing that natural selection was [hill-climbing](https://en.wikipedia.org/wiki/Hill_climbing) us to do. I think that the same basic thing holds true of any way of producing general intelligence that looks like anything we’re currently doing in AI. If you get it to play Go, it will play Go; but AlphaZero is not reflecting on itself, it’s not learning things, it doesn’t have a general model of the world, it’s not operating in new contexts and making new contexts for itself to be in. It’s not smarter than the people optimizing it, or smarter than the internal processes optimizing it. Our current methods of alignment do not scale, and I think that all of the actual technical difficulty that is actually going to shoot down these projects and actually kill us is contained in getting the whole thing to work at all. Even if all you are trying to do is end up with two identical strawberries on a plate without destroying the universe, I think that’s already 90% of the work, if not 99%. **Sam:** Interesting. That analogy to evolution—you can look at it from the other side. In fact, I think I first heard it put this way by your colleague Nate Soares. Am I pronouncing his last name correctly? **Eliezer:** As far as I know! I’m terrible with names. (*laughs*) **Sam:** Okay. (*laughs*) So this is by way of showing that we could give an intelligent system a set of goals which could then form other goals and mental properties that we really couldn’t foresee and that would not be foreseeable based on the goals we gave it. And by analogy, he suggests that we think about what natural selection has actually optimized us to do, which is incredibly simple: merely to spawn and get our genes into the next generation and stay around long enough to help our progeny do the same, and that’s more or less it. And basically everything *we* explicitly care about, natural selection never foresaw and can’t see us doing even now. Conversations like this have very little to do with getting our genes into the next generation. The tools we’re using to think these thoughts obviously are the results of a cognitive architecture that has been built up over millions of years by natural selection, but again it’s been built based on a very simple principle of survival and adaptive advantage with the goal of propagating our genes. So you can imagine, by analogy, building a system where you’ve given it goals but this thing becomes reflective and even self-optimizing and begins to do things that we can no more see than natural selection can see our conversations about AI or mathematics or music or the pleasures of writing good fiction or anything else. **Eliezer:** I’m not concerned that this is impossible to do. If we could somehow get a textbook from the way things would be 60 years in the future if there was no intelligence explosion—if we could somehow get the textbook that says how to do the thing, it probably might not even be that complicated. The thing I’m worried about is that the way that natural selection does it—it’s not stable. That particular way of doing it is not stable. I don’t think the particular way of doing it via gradient descent of a massive system is going to be stable, I don’t see anything to do with the current technological set in artificial intelligence that is stable, and even if this problem takes only two years to resolve, that additional delay is potentially enough to destroy everything. That’s the part that I’m worried about, not about some kind of fundamental philosophical impossibility. I’m not worried that it’s impossible to figure out how to build a mind that does a particular thing and just that thing and doesn’t destroy the world as a side effect; I worry that it takes an additional two years or longer to figure out how to do it that way. --- ### 5. No fire alarm for AGI ([1:21:40](https://overcast.fm/+Ic2hwsH2U/1:21:40)) --- **Sam:** So, let’s just talk about the near-term future here, or what you think is likely to happen. Obviously we’ll be getting better and better at building narrow AI. Go is now, along with Chess, ceded to the machines. Although I guess probably cyborgs—human-computer teams—may still be better for the next fifteen days or so against the best machines. But eventually, I would expect that humans of any ability will just be adding noise to the system, and it’ll be true to say that the machines are better at chess than any human-computer team. And this will be true of many other things: driving cars, flying planes, proving math theorems. What do you imagine happening when we get on the cusp of building something general? How do we begin to take safety concerns seriously enough, so that we’re not just committing some slow suicide and we’re actually having a conversation about the implications of what we’re doing that is tracking some semblance of these safety concerns? **Eliezer:** I have much clearer ideas about how to go around tackling the technical problem than tackling the social problem. If I look at the way that things are playing out now, it seems to me like the default prediction is, “People just ignore stuff until it is way, way, way too late to start thinking about things.” The way I think I phrased it is, “[T](https://intelligence.org/2017/10/13/fire-alarm/)[here’s no fire alarm for artificial general intelligence.](https://intelligence.org/2017/10/13/fire-alarm/)” Did you happen to see that particular essay by any chance? **Sam:** No. **Eliezer:** The way it starts is by saying: “What is the purpose of a fire alarm?” You might think that the purpose of a fire alarm is to tell you that there’s a fire so you can react to this new information by getting out of the building. Actually, as we know from experiments on pluralistic ignorance and bystander apathy, if you put three people in a room and smoke starts to come out from under the door, it only happens that anyone reacts around a third of the time. People glance around to see if the other person is reacting, but they try to look calm themselves so they don’t look startled if there isn’t really an emergency; they see other people trying to look calm; they conclude that there’s no emergency and they keep on working in the room, even as it starts to fill up with smoke. This is a pretty well-replicated experiment. I don’t want to put absolute faith, because there is the [replication crisis](https://en.wikipedia.org/wiki/Replication_crisis); but there’s a lot of variations of this that found basically the same result. I would say that the real function of the fire alarm is the social function of telling you that everyone else knows there’s a fire and you can now exit the building in an orderly fashion without looking panicky or losing face socially. **Sam:** Right. It overcomes embarrassment. **Eliezer:** It’s in this sense that I mean that there’s no fire alarm for artificial general intelligence. There’s all sorts of things that could be signs. AlphaZero could be a sign. Maybe AlphaZero is the sort of thing that happens five years before the end of the world across most planets in the universe. We don’t know. Maybe it happens 50 years before the end of the world. You don’t know that either. No matter what happens, it’s never going to look like the socially agreed fire alarm that no one can deny, that no one can excuse, that no one can look to and say, “Why are you acting so panicky?” There’s never going to be common knowledge that other people will think that you’re still sane and smart and so on if you react to an AI emergency. And we’re even seeing articles now that seem to tell us pretty explicitly what sort of implicit criterion some of the current senior respected people in AI are setting for when they think it’s time to *start* worrying about artificial general intelligence and alignment. And what these always say is, “I don’t know how to build an artificial general intelligence. I have no idea how to build an artificial general intelligence.” And this feels to them like saying that it must be impossible and very far off. But if you look at the lessons of history, most people had no idea whatsoever how to build a nuclear bomb—even most scientists in the field had no idea how to build a nuclear bomb—until they woke up to the headlines about Hiroshima. Or the Wright Flyer. News spread less quickly in the time of the Wright Flyer. Two years *after* the Wright Flyer, you can still find people saying that heavier-than-air-flight is impossible. And there’s cases on record of one of the Wright brothers, I forget which one, saying that flight seems to them to be 50 years off, two years before they did it themselves. Fermi said that a sustained critical chain reaction was 50 years off, if it could be done at all, two years before he personally oversaw the building of the first pile. And if this is what it feels like to the people who are closest to the thing—not the people who find out about it in the news a couple of days later, the people have the best idea of how to do it, or are the closest to crossing the line—then the feeling of something being far away because you don’t know how to do it yet is just not very informative. It could be 50 years away. It could be two years away. That’s what history tells us. **Sam:** But even if we knew it was 50 years away—I mean, granted, it’s hard for people to have an emotional connection to even the end of the world in 50 years—but even if we knew that the chance of this happening before 50 years was zero, that is only really consoling on the assumption that 50 years is enough time to figure out how to do this safely and to create the social and economic conditions that could absorb this change in human civilization. **Eliezer:** Professor Stuart Russell, who’s the co-author of probably the leading undergraduate AI textbook—the same guy who said you can’t bring the coffee if you’re dead—the way Stuart Russell put it is, “Imagine that you knew for a fact that the aliens are coming in 30 years. Would you say, ‘Well, that’s 30 years away, let’s not do anything’? No! It’s a big deal if you know that there’s a spaceship on its way toward Earth and it’s going to get here in about 30 years at the current rate.” But we don’t even know that. There’s this lovely [tweet](https://twitter.com/esyudkowsky/status/852981816180973568) by a fellow named McAfee, who’s one of the major economists who’ve been talking about labor issues of AI. I could perhaps look up the exact phrasing, but roughly, he said, “Guys, stop worrying! We have NO IDEA whether or not AI is imminent.” And I was like, “That’s not really a reason to not worry, now is it?” **Sam:** It’s not even close to a reason. That’s the thing. There’s this assumption here that people aren’t seeing. It’s just a straight up non sequitur. Referencing the time frame here only makes sense if you have some belief about how much time you need to solve these problems. 10 years is not enough if it takes 12 years to do this safely. **Eliezer:** Yeah. I mean, the way I would put it is that if the aliens are on the way in 30 years and you’re like, “Eh, should worry about that later,” I would be like: “When? What’s your business plan? When exactly are you supposed to start reacting to aliens—what triggers that? What are you supposed to be doing after that happens? How long does this take? What if it takes slightly longer than that?” And if you don’t have a business plan for this sort of thing, then you’re obviously just using it as an excuse. If we’re supposed to wait until later to start on AI alignment: When? Are you actually going to start then? Because I’m not sure I believe you. What do you do at that point? How long does it take? How confident are you that it works, and why do you believe that? What are the early signs if your plan isn’t working? What’s the business plan that says that we get to wait? **Sam:** Right. So let’s just envision a little more, insofar as that’s possible, what it will be like for us to get closer to the end zone here without having totally converged on a safety regime. We’re picturing this is not just a problem that can be discussed between Google and Facebook and a few of the companies doing this work. We have a global society that has to have some agreement here, because who knows what China will be doing in 10 years, or Singapore or Israel or any other country. So, we haven’t gotten our act together in any noticeable way, and we’ve continued to make progress. I think the one basis for hope here is that good AI, or well-behaved AI, will be the antidote to bad AI. We’ll be fighting this in a kind of piecemeal way all the time, the moment these things start to get out. This will just become of a piece with our growing cybersecurity concerns. Malicious code is something we have now; it already cost us billions and billions of dollars a year to safeguard against it. **Eliezer:** It doesn’t scale. There’s no continuity between what you have to do to fend off little pieces of code trying to break into your computer, and what you have to do to fend off something smarter than you. These are totally different realms and regimes and separate magisteria—a term we all hate, but nonetheless in this case, yes, separate magisteria of how you would even start to think about the problem. We’re not going to get automatic defense against superintelligence by building better and better anti-virus software. **Sam:** Let’s just step back for a second. So we’ve talked about the AI-in-a-box scenario as being surprisingly unstable for reasons that we can perhaps only dimly conceive, but isn’t there even a scarier concern that this is just not going to be boxed anyway? That people will be so tempted to make money with their newest and greatest AlphaZeroZeroZeroNasdaq—what are the prospects that we will even be smart enough to keep the best of the best versions of almost-general intelligence in a box? **Eliezer:** I mean, I know some of the people who say they want to do this thing, and all of the ones who are not utter idiots are past the point where they would deliberately enact Hollywood movie plots. Although I am somewhat concerned about the degree to which there’s a sentiment that you need to be able to connect to the Internet so you can run your AI on Amazon Web Services using the latest operating system updates, and trying to not do that is such a supreme disadvantage in this environment that you might as well be out of the game. I don’t think that’s *true*, but I’m worried about the sentiment behind it. But the problem as I see it is… Okay, there’s a big big problem and a little big problem. The big big problem is, “Nobody knows how to make the nice AI.” You ask people how to do it, they either don’t give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes. It doesn’t matter how good their intentions are. It doesn’t matter if they don’t want to enact a Hollywood movie plot. They don’t know how to do it. Nobody knows how to do it. There’s no point in even talking about the arms race if the arms race is between a set of unfriendly AIs with no friendly AI in the mix. The *little* big problem is the arms race aspect, where maybe DeepMind wants to build a nice AI, maybe China is being responsible because they understand the concept of stability, but Russia copies China’s code and Russia takes off the safeties. That’s the little big problem, which is still a very large problem. **Sam:** Yeah. I mean, most people think the real problem is human: malicious use of powerful AI that is safe. “Don’t give your AI to the next Hitler and you’re going to be fine.” **Eliezer:** They’re just wrong. They’re just wrong as to where the problem lies. They’re looking in the wrong direction and ignoring the thing that’s actually going to kill them. --- ### 6. Accidental AI, mindcrime, and MIRI ([1:34:30](https://overcast.fm/+Ic2hwsH2U/1:34:30)) --- **Sam:** To be even more pessimistic for a second, I remember at that initial conference in Puerto Rico, there was this researcher—who I have not paid attention to since, but he seemed to be in the mix—I think his name was Alexander Wissner-Gross—and he seemed to be arguing in his presentation at that meeting that this would very likely emerge organically, already in the wild, very likely in financial markets. We would be put so many AI resources into the narrow paperclip-maximizing task of making money in the stock market that, by virtue of some quasi-Darwinian effect here, this will just knit together on its own online and the first general intelligence we’ll discover will be something that will be already out in the wild. Obviously, that does not seem ideal, but does that seem like a plausible path to developing something general and smarter than ourselves, or does that just seem like a fairy tale? **Eliezer:** More toward the fairy tale. It seems to me to be only slightly more reasonable than the old theory that if you got dirty shirts and straw, they would spontaneously generate mice. People didn’t understand mice, so as far as they know, they’re a kind of thing that dirty shirts and straw can generate; but they’re not. And I similarly think that you would need a very vague model of intelligence, a model with no gears and wheels inside it, to believe that the equivalent of dirty shirts and straw generates it first, as opposed to people who have gotten some idea of what the gears and wheels are and are deliberately building the gears and wheels. The reason why it’s slightly more reasonable than the dirty shirts and straw example is that maybe it is indeed true that if you just have people pushing on narrow AI for another 10 years past the point where AGI would otherwise become possible, they eventually just sort of wander into AGI. But I think that that happens 10 years later in the natural timeline than AGI put together by somebody who actually is trying to put together AGI and has the best theory out of the field of the contenders, or possibly just the most vast quantities of brute force, à la Google’s [tensor chips](https://en.wikipedia.org/wiki/Tensor_processing_unit). I think that it gets done on purpose 10 years before it would otherwise happen by accident. **Sam:** Okay, so there’s I guess just one other topic here that I wanted to touch on before we close on discussing your book, which is not narrowly focused on this: this idea that consciousness will emerge at some point in our developing intelligent machines. Then we have the additional ethical concern that we could be building machines that can suffer, or building machines that can simulate suffering beings in such a way as to actually make suffering being suffer in these simulations. We could be essentially creating hells and populating them. There’s no barrier to thinking about this being not only possible, but likely to happen, because again, we’re just talking about the claim that consciousness arises as an emergent property of some information-processing system and that this would be substrate-independent. Unless you’re going to claim (1) that consciousness does not arise on the basis of anything that atoms do—it has some other source—or (2) those atoms have to be the wet atoms in biological substrate and they can’t be *in silico*. Neither of those claims is very plausible at this point scientifically. So then you have to imagine that as long as we just keep going, keep making progress, we will eventually build, whether by design or not, systems that not only are intelligent but are conscious. And then this opens a category of malfeasance that you or someone in this field has dubbed [mindcrime](https://arbital.com/p/mindcrime/). What is mindcrime? And why is it so difficult to worry about? **Eliezer:** I think, by the way, that that’s a pretty terrible term. (*laughs*) I’m pretty sure I wasn’t the one who invented it. I am the person who invented some of these terrible terms, but not that one in particular. First, I would say that my general hope here would be that as the result of building an AI whose design and cognition flows in a sufficiently narrow channel that you can understand it and make strong statements about it, you are also able to look at that and say, “It seems to me pretty unlikely that this is conscious or that if it is conscious, it is suffering.” I realize that this is a sort of high bar to approach. The main way in which I would be worried about conscious systems emerging within the system without that happening on purpose would be if you have a smart general intelligence and it is trying to model humans. We know humans are conscious, so the computations that you run to build very accurate predictive models of humans are among the parts that are most likely to end up being conscious without somebody having done that on purpose. **Sam:** Did you see the *Black Mirror* episode that basically modeled this? **Eliezer:** I haven’t been watching *Black Mirror*, sorry. (*laughs*) **Sam:** You haven’t been? **Eliezer:** I haven’t been, nope. **Sam:** They’re surprisingly uneven. Some are great, and some are really not great, but there’s one episode where—and this is spoiler alert, if you’re watching *Black Mirror* and you don’t want to hear any punch lines then tune out here—but there’s one episode which is based on this notion that basically you just see these people living in this dystopian world of total coercion where they’re just assigned through this lottery dates that go well or badly. You see the dating life of these people going on and on, where they’re being forced by some algorithm to get together or break up. **Eliezer:** And let me guess, this is the future’s OkCupid trying to determine good matches? **Sam:** Exactly, yes. **Eliezer:** (*laughs*) **Sam:** They’re just simulated minds in a dating app that’s being optimized for real people who are outside holding the phone, but yeah. The thing you get is that all of these conscious experiences have been endlessly imposed on these people in some hellscape of our devising. **Eliezer:** That’s actually a surprisingly good plot, in that it doesn’t just assume that the programmers are being completely chaotic and stupid and randomly doing the premise of the plot. Like, there’s actually a reason why the AI is simulating all these people, so good for them, I guess. And I guess that does get into the thing I was going to say, which is that I’m worried about minds being embedded because they are being used predictively, to predict humans. That is the obvious reason why that would happen without somebody intending it. Whereas endless dystopias don’t seem to me to have any use to a paperclip maximizer. **Sam:** Right. All right, so there’s undoubtedly much more to talk about here. I think we’re getting up on the two-hour mark here, and I want to touch on your new book, which as I said I’m halfway through and finding very interesting. **Eliezer:** If I can take a moment for a parenthetical before then, sorry? **Sam:** Sure, go for it. **Eliezer:** I just wanted to say that thanks mostly to the cryptocurrency boom—go figure, a lot of early investors in cryptocurrency were among our donors—the Machine Intelligence Research Institute is no longer strapped for cash, so much as it is strapped for engineering talent. (*laughs*) **Sam:** Nice. That’s a good problem to have. **Eliezer:** Yeah. If anyone listening to this is a brilliant computer scientist who wants to work on more interesting problems than they’re currently working on, and especially if you are already oriented to these issues, please consider going to [intelligence.org/engineers](https://intelligence.org/engineers) if you’d like to work for our nonprofit. **Sam:** Let’s say a little more about that. I will have given a bio for you in the introduction here, but the Machine Intelligence Research Institute (MIRI) is an organization that you co-founded, which you’re still associated with. Do you want to say what is happening there and what jobs are on offer? **Eliezer:** Basically, it’s the original AI alignment organization that, especially today, works primarily on the technical parts of the problem and the technical issues. Previously, it has been working mainly on a more pure theory approach, but now that narrow AI has gotten powerful enough, people (not just us but elsewhere, like DeepMind) are starting to take shots at, “With current technology, what setups can we do that will tell us something about how to do this stuff?” So the technical side of AI alignment is getting a little bit more practical. I’m worried that it’s not happening fast enough, but, well, if you’re worried about that sort of thing, what one does is adds funding and especially adds smart engineers. **Sam:** Do you guys collaborate with any of these companies doing the work? Do you have frequent contact with DeepMind or Facebook or anyone else? **Eliezer:** I mean, the people in AI alignment all go to the same talks, and I’m sure that the people who do AI alignment at DeepMind talk to DeepMind. Sometimes we’ve been known to talk to the upper people at DeepMind, and DeepMind is in the same country as the Oxford Future of Humanity Institute. So bandwidth here might not be really optimal, but it’s certainly not zero. --- ### 7. Inadequate equilibria ([1:44:40](https://overcast.fm/+Ic2hwsH2U/1:44:40)) --- **Sam:** Okay, so your new book—again, the title is [*Inadequate Equilibria: Where and How Civilizations Get Stuck*](https://equilibriabook.com). That is a title that needs some explaining. What do you mean by “inadequate”? What do you mean by “equilibria”? And how does this relate to civilizations getting stuck? **Eliezer:** So, one way to look at the book is that it’s about how you can get crazy, stupid, evil large systems without any of the people inside them being crazy, evil, or stupid. I think that a lot of people look at various aspects of the dysfunction of modern civilization and they sort of hypothesize evil groups that are profiting from the dysfunction and sponsoring the dysfunction; and if only we defeated these evil people, the system could be rescued. And the truth is more complicated than that. But what are the details? The details matter a lot. How do you have systems full of nice people doing evil things? **Sam:** Yeah. I often reference this problem by citing the power of incentives, but there are many other ideas here which are very useful to think about, which capture what we mean by the power of incentives. There are a few concepts here that we should probably mention. What is a coordination problem? This is something you reference in the book. **Eliezer:** A coordination problem is where there’s a better way to do it, but you have to change more than one thing at a time. So an example of a problem is: Let’s say you have Craigslist, which is one system where buyers and sellers meet to buy and sell used things within a local geographic area. Let’s say that you have an alternative to Craigslist and your alternatives is Danslist, and Danslist is genuinely better. (Let’s not worry for a second about how many startups think that without it being true; suppose it’s like genuinely better.) All of the sellers on Craigslist want to go someplace that there’s buyers. All of the buyers on Craigslist want to go someplace that there’s sellers. How do you get your new system started when it can’t get started by one person going on to Danslist and two people going on to Danslist? There’s no motive for them to go there until there’s already a bunch of people on Danslist. An awful lot of times, when you find a system that is stuck in an evil space, what’s going on with it is that for it to move out of that space, more than one thing inside it would have to change at a time. So there’s all these nice people inside it who would like to be in a better system, but everything they could locally do on their own initiative is not going to fix the system, and it’s going to make things worse for them. That’s the kind of problem that scientists have with trying to get away from the journals that are just ripping them off. They’re starting to move away from those journals, but journals have prestige based on the scientists that publish there and the other scientists that cite them, and if you just start this one new journal all by yourself and move there all by yourself, it has a low [impact factor](https://en.wikipedia.org/wiki/Impact_factor). So everyone’s got to move simultaneously. That’s how the scam went on for 10 years. 10 years is a long time, but they couldn’t all jump to the new system because they couldn’t jump one at a time. **Sam:** Right. The problem is that the world is organized in such a way that it is rational for each person to continue to behave the way he or she is behaving in this highly suboptimal way, given the way everyone else is behaving. And to change your behavior by yourself isn’t sufficient to change the system, and is therefore locally irrational, because your life will get worse if you change by yourself. Everyone has to coordinate their changing so as to move to some better equilibrium. **Eliezer:** That’s one of the fundamental foundational ways that systems can get stuck. There are others. **Sam:** The example that I often use when talking about problems of this sort is life in a maximum-security prison, which is as perversely bad as one can imagine. The incentives are aligned in such a way that no matter how good you are, if you’re put into a maximum-security prison, it is only rational for you to behave terribly and unethically and in such a way as to guarantee that this place is far more unpleasant than it need be, just because of how things are structured. One example that I’ve used, and that people are familiar with at this point from having read books and seen movies that depict this more or less accurately: whether or not you’re a racist, your only rational choice, apparently, is to join a gang that is aligned along the variable of race. And if you fail to do this, you’ll be preyed upon by everyone. So if you’re a white guy, you have to join the white Aryan neo-Nazi gang. If you’re a black guy, you have to join the black gang. Otherwise, you’re just in the middle of this war of all against all. And there’s no way for you, based on your ethical commitment to being non-racist, to change how this is functioning. And we’re living in a similar kind of prison, of sorts, when you just look at how non-optimal many of these attractor states are that we are stuck in civilizationally. **Eliezer:** Parenthetically, I do want to be slightly careful about using the word “rational” to describe the behavior of people stuck in the system, because I consider that to be a very powerful word. It’s possible that if they were all *really* rational and had [common knowledge](https://en.wikipedia.org/wiki/Common_knowledge_(logic)) of rationality, they would be able to [solve the coordination problem](https://intelligence.org/2017/10/22/fdt/). But humanly speaking—not so much in terms of ideal rationality, but in terms of what people can actually do and the options they actually have—their best choice is still pretty bad systemically. **Sam:** Yeah. So what do you do in this book? How would you summarize your thesis? How do we move forward? Is there anything to do, apart from publicizing the structure of this problem? **Eliezer:** It’s not really a very hopeful book in that regard. It’s more about how to predict which parts of society will perform poorly to the point where you as an individual can manage to do better for yourself, really. One of the examples I give in the book is that my wife has Seasonal Affective Disorder, and she cannot be treated by the tiny little light boxes that your doctor tries to prescribe. So I’m like, “Okay, if the sun works, there’s some amount of light that works, how about if I just try stringing up the equivalent of 100 light bulbs in our apartment?” Now, when you have an idea like this, somebody might ask, “Well, okay, but you’re not thinking in isolation. There’s a civilization around you. If this works, shouldn’t there be a record of it? Shouldn’t a researcher have investigated it already?” There’s literally probably more than 100 million people around the world, especially in the extreme latitudes, who have some degree of Seasonal Affective Disorder, and some of it’s pretty bad. That means that there’s a kind of profit, a kind of energy gradient that seems like it could be traversable if solving the problem was as easy as putting up a ton of light bulbs in your apartment. Wouldn’t some enterprising researcher have investigated this already? Wouldn’t the results be known? And the answer is, as far as I can tell, no. It hasn’t been investigated, the results aren’t known, and when I tried putting up a ton of light bulbs, it seems to have worked pretty well for my wife. Not perfectly, but a lot better than it used to be. So why isn’t this one of the first things you find when you Google “What do I do about Seasonal Affective Disorder when the light box doesn’t work?” And that’s what takes this sort of long story, that’s what takes the analysis. That’s what takes the thinking about the journal system and what the funding sources are for people investigating Seasonal Affective Disorder, and what kind of publications get the most attention. And whether the barrier of needing to put up 100 light bulbs in a bunch of different apartments for people in the controlled study—which would be difficult to blind, except maybe by using a lot fewer light bulbs—whether the details of having to adapt light bulbs to every house which is different is enough of an obstacle to prevent any researcher from ever investigating this obvious-seeming solution to a problem that probably hundreds of millions of people have, and maybe 50 million people have very severely? As far as I can tell, the answer is yes. And this is the kind of thinking that does not enable you to save civilization. If there was a way to make an enormous profit by knowing this, the profit would probably already be taken. If it was possible for one person to fix the problem, it would probably already be fixed. But you, personally, can fix your wife’s crippling Seasonal Affective Disorder by doing something that science knows not, because of an inefficiency in the funding sources for the researchers. **Sam:** This is really the global problem we need to figure out how to tackle, which is to recognize those points on which incentives are perversely misaligned so as to guarantee needless suffering or complexity or failure to make breakthroughs that would raise our quality of life immensely. So identify those points and then realign the incentives somehow. The market is in many respects good at this, but there are places where it obviously fails. We don’t have many tools to apply the right pressure here. You have the profit motive in markets—so you can either get fantastically rich by solving some problem, or not—or we have governments that can decide, “Well, this is a problem that markets can’t solve because the wealth isn’t there to be gotten, strangely, and yet there’s an immense amount of human suffering that would be alleviated if you solved this problem. You can’t get people for some reason to pay for the alleviation of that suffering, reliably.” But apart from markets and governments, are there any other large hammers to be wielded here? **Eliezer:** I mean, sort of crowdfunding, I guess, although the hammer currently isn’t very large. But mostly, like I said, this book is about where you can do better individually or in small groups and when you shouldn’t assume that society knows what it’s doing; and it doesn’t have a bright message of hope about how to fix things. I’m sort of prejudiced personally over here, because I think that the artificial general intelligence timeline is likely to run out before humanity gets that much better at solving inadequacy, systemic problems in general. I don’t really see human nature or even human practice changing by that much over the amount of time we probably have left. Economists already know about market failures. That’s a concept they already have. They already have the concept of government trying to correct it. It’s not obvious to me that there is a quantum leap to be made staying within just those dimensions of thinking about the problem. If you ask me, “Hey, Eliezer: it’s five years in the future, there’s still no artificial general intelligence, and a great leap forward has occurred in people to deal with these types of systemic issues. How did that happen?” Then my guess would be something like Kickstarter, but much better, that turned out to enable people in large groups to move forward when none of them could move forward individually. Something like the group movements that scientists made without all that much help from the government (although there was help from funders changing their policies) to jump to new journals all at the same time, and get partially away from the Elsevier closed-source journal scam. Maybe there’s something brilliant that Facebook does—with machine learning, even. They get better at showing people things that are solutions to their coordination problems; they’re better at routing those around when they exist, and people learn that these things work and they jump using them simultaneously. And by these means, voters start to elect politicians who are not nincompoops, as opposed to choosing whichever nincompoop on offer is most appealing. But this is a fairy tale. This is not a prediction. This is, “If you told me that somehow this had gotten significantly better in five years, what happened?” This is me making up what might have happened. --- ### 8. Rapid capability gain in AGI ([1:59:02](https://overcast.fm/+Ic2hwsH2U/1:59:02)) --- **Sam:** Right. Well, I don’t see how that deals with the main AI concern we’ve been talking about. I can see some shift, or some solution to a massive coordination problem, politically or in just the level of widespread human behavior—let’s say our use of social media and our vulnerability to fake news and conspiracy theories and other crackpottery, let’s say we find some way to all shift our information diet and our expectations and solve a coordination problem that radically cleans up our global conversation. I can see that happening. But when you’re talking about dealing with the alignment problem, you’re talking about changing the behavior of a tiny number of people comparatively. I mean, I don’t know what it is. What’s the community of AI researchers now? It’s got to be numbered really in the hundreds when you’re talking about working on AGI. But what will it be when we’re close to the finish line? How many minds would have to suddenly change and becoming immune to the wrong economic incentives to coordinate the solution there? What are we talking about, 10,000 people? **Eliezer:** I mean, first of all, I don’t think we’re looking at an economic problem. I think that artificial general intelligence capabilities, once they exist, are going to scale too fast for that to be a useful way to look at the problem. AlphaZero going from 0 to 120 mph in four hours or a day—that is not out of the question here. And even if it’s a year, a year is still a very short amount of time for things to scale up. I think that the main thing you should be trying to do with the first artificial general intelligence ever built is [a very narrow, non-ambitious task](https://arbital.com/p/minimality_principle/) that shuts down the rest of the arms race by putting off switches in all the GPUs and shutting them down if anyone seems to be trying to build an overly artificially intelligent system. Because I don’t think that the AI that you have built narrowly enough that you understood what it was doing is going to be able to defend you from arbitrary unrestrained superintelligences. The AI that you have built understandably enough to be good and not done fully general recursive self-improvement is not strong enough to solve the whole problem. It’s not strong enough to have everyone else going off and developing their own artificial general intelligences after that without that automatically destroying the world. **Sam:** We’ve been speaking for now over two hours; what can you say to someone who has followed us this long, but for whatever reason the argument we’ve made has not summed to being emotionally responsive to the noises you just made. Is there anything that can be briefly said so as to give them pause? **Eliezer:** I’d say this is a thesis of capability gain. This is a thesis of how fast artificial general intelligence gains in power once it starts to be around, whether we’re looking at 20 years (in which case this scenario does not happen) or whether we’re looking at something closer to the speed at which Go was developed (in which case it does happen) or the speed at which AlphaZero went from 0 to 120 and better-than-human (in which case there’s a bit of an issue that you better prepare for in advance, because you’re not going to have very long to prepare for it once it starts to happen). And I would say this is a computer science issue. This is not here to be part of a narrative. This is not here to fit into some kind of grand moral lesson that I have for you about how civilization ought to work. I think that this is just the way the background variables are turning up. Why do I think that? It’s not that simple. I mean, I think a lot of people who see the power of intelligence will already find that pretty intuitive, but if you don’t, then you should read my paper [*Intelligence Explosion Microeconomics*](https://intelligence.org/files/IEM.pdf) about returns on cognitive reinvestment. It goes through things like the evolution of human intelligence and how the logic of evolutionary biology tells us that when human brains were increasing in size, there were increasing marginal returns to fitness relative to the previous generations for increasing brain size. Which means that it’s not the case that as you scale intelligence, it gets harder and harder to buy. It’s not the case that as you scale intelligence, you need exponentially larger brains to get linear improvements. At least something slightly like the opposite of this is true; and we can tell this by looking at the fossil record and using some logic, but that’s not simple. **Sam:** Comparing ourselves to chimpanzees works. We don’t have brains that are 40 times the size or 400 times the size of chimpanzees, and yet what we’re doing—I don’t know what measure you would use, but it exceeds what they’re doing by some ridiculous factor. **Eliezer:** And I find that convincing, but other people may want additional details. And my message would be that the emergency situation is not part of a narrative. It’s not there to make the point of some kind of moral lesson. It’s my prediction as to what happens, after walking through a bunch of technical arguments as to how fast intelligence scales when you optimize it harder. AlphaZero seems to me like a genuine case in point. That is showing us that capabilities that in humans require a lot of tweaking and that human civilization built up over centuries of masters teaching students how to play Go, and that no individual human could invent in isolation… Even the most talented Go player, if you plopped them down in front of a Go board and gave them only a day, would play garbage. If they had to invent all of their own Go strategies without being part of a civilization that played Go, they would not be able to defeat modern Go players at all. AlphaZero blew past all of that in less than a day, starting from scratch, without looking at any of the games that humans played, without looking at any of the theories that humans had about Go, without looking at any of the accumulated knowledge that we had, and without very much in the way of special-case code for Go rather than chess—in fact, zero special-case code for Go rather than chess. And that in turn is an example that refutes another thesis about how artificial general intelligence develops slowly and gradually, which is: “Well, it’s just one mind; it can’t beat our whole civilization.” I would say that there’s a bunch of technical arguments which you walk through, and then after walking through these arguments you assign a bunch of probability, maybe not certainty, to artificial general intelligence that scales in power very fast—a year or less. And in this situation, if alignment is technically difficult, if it is easy to screw up, if it requires a bunch of additional effort—in this scenario, if we have an arms race between people who are trying to get their AGI first by doing a little bit less safety because from their perspective that only drops the probability a little; and then someone else is like, “Oh no, we have to keep up. We need to strip off the safety work too. Let’s strip off a bit more so *we* can get in the front.”—if you have this scenario, and by a miracle the first people to cross the finish line have actually not screwed up and they actually have a functioning powerful artificial general intelligence that is able to prevent the world from ending, you have to prevent the world from ending. You are in a terrible, terrible situation. You’ve got your one miracle. And this follows from the rapid capability gain thesis and at least the current landscape for how these things are developing. **Sam:** Let’s just linger on this point for a second. This fast takeoff—is this assuming recursive self improvement? And how fringe an idea is this in the field? Are most people who are thinking about this assuming (for good reason or not) that a slow takeoff is far more likely, over the course of many, many years, and that the analogy to AlphaZero is not compelling? **Eliezer:** I think they are too busy explaining why current artificial intelligence methods do not knowably, quickly, immediately give us artificial general intelligence—from which they then conclude that it is 30 years off. They have not said, “And then once we get there, it’s going to develop much more slowly than AlphaZero, and here’s why.” There isn’t a thesis to that effect that I’ve seen from artificial intelligence people. Robin Hanson had a thesis to this effect, and there was this mighty debate on our blog between Robin Hanson and myself that was published as the [*AI-Foom Debate*](https://intelligence.org/ai-foom-debate/) mini-book. And I have claimed recently [on Facebook](https://www.facebook.com/yudkowsky/posts/10155992246384228) that now that we’ve seen AlphaZero, AlphaZero seems like strong evidence against Hanson’s [thesis](http://www.overcomingbias.com/2014/07/30855.html) for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. **Sam:** I’m actually going to be doing a podcast with Robin in a few weeks, a live event. So what’s the best version of his argument, and why is he wrong? **Eliezer:** Nothing can prepare you for Robin Hanson! (*laughs*) Well, the [argument](https://www.lesswrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity) that Hanson has given is that these systems are still immature and narrow and things will change when they get general. And my reply has been something like, “Okay, what changes your mind short of the world actually ending? If you theory is wrong, do we get to find out about that at all before the world ends?” **Sam:** To which he says? **Eliezer:** I don’t remember if he’s replied to that one yet. **Sam:** I’ll let Robin be Robin. Well, listen, Eliezer, it has been great to talk to you, and I’m glad we got a chance to do it at such length. And again, it does not exhaust the interest or consequence of this topic, but it’s certainly a good start for people who are new to this. Before I let you go, where should people look for you online? Do you have a preferred domain that we could target? **Eliezer:** I would mostly say [intelligence.org](https://intelligence.org). If you’re looking for me personally, [facebook.com/yudkowsky](https://facebook.com/yudkowsky), and if you’re looking for my most recent book, [equilibriabook.com](https://equilibriabook.com/). **Sam:** I’ll put links on my website where I embed this podcast. So again, Eliezer, thanks so much—and to be continued. I always love talking to you, and this will not be the last time, AI willing. **Eliezer:** This was a great conversation, and thank you very much for having me on. The post [Sam Harris and Eliezer Yudkowsky on “AI: Racing Toward the Brink”](https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
597f5a32-c7c6-4d0a-8007-e8765f0b3a46
trentmkelly/LessWrong-43k
LessWrong
How I Lost 100 Pounds Using TDT Background Information: Ingredients of Timeless Decision Theory Alternate Approaches Include: Self-empathy as a source of “willpower”, Applied Picoeconomics, Akrasia, hyperbolic discounting, and picoeconomics, Akrasia Tactics Review Standard Disclaimer: Beware of Other-Optimizing Timeless Decision Theory (or TDT) allowed me to succeed in gaining control over when and how much I ate in a way that previous attempts at precommitment had repeatedly failed to do. I did so well before I was formally exposed to the concept of TDT, but once I clicked on TDT I understood that I had effectively been using it. That click came from reading Eliezer’s shortest summary of TDT, which was: > The one-sentence version is:  Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation You can find more here but my recommendation at least at first is to stick with the one sentence version. It is as simple as it can be, but no simpler.  Utilizing TDT gave me several key abilities that I previously lacked. The most important was realizing that what I chose now would be the same choice I would make at other times under the same circumstances. This allowed me to compare having the benefits now to paying the costs now, as opposed to paying costs now for future benefits later. This ability allowed me to overcome hyperbolic discounting. The other key ability was that it freed me from the need to explicitly stop in advance to make precommitements each time I wanted to alter my instinctive behavior. Instead, it became automatic to make decisions in terms of which rules would be best to follow. With that as background, this is how I made it happen: I was walking home from class along my usual route I had made a habit while doing this of stopping into Famiglia Pizza and ordering garlic knots. I like garlic knots quite a bit, but I also hated being fat and the way being fat ma
ed1b3368-1a9f-475b-a2d1-24bdd33a9407
StampyAI/alignment-research-dataset/arxiv
Arxiv
Tackling Climate Change with Machine Learning
79dd8f7c-bac0-4e68-b69f-acefdc0769cd
trentmkelly/LessWrong-43k
LessWrong
Causality and Moral Responsibility Followup to:  Thou Art Physics, Timeless Control, Hand vs. Fingers, Explaining vs. Explaining Away I know (or could readily rediscover) how to build a binary adder from logic gates.  If I can figure out how to make individual logic gates from Legos or ant trails or rolling ping-pong balls, then I can add two 32-bit unsigned integers using Legos or ant trails or ping-pong balls. Someone who had no idea how I'd just done the trick, might accuse me of having created "artificial addition" rather than "real addition". But once you see the essence, the structure that is addition, then you will automatically see addition whenever you see that structure.  Legos, ant trails, or ping-pong balls. Even if the system is - gasp!- deterministic, you will see a system that, lo and behold, deterministically adds numbers.  Even if someone - gasp! - designed the system, you will see that it was designed to add numbers.  Even if the system was - gasp!- caused, you will see that it was caused to add numbers. Let's say that John is standing in front of an orphanage which is on fire, but not quite an inferno yet; trying to decide whether to run in and grab a baby or two.  Let us suppose two slightly different versions of John - slightly different initial conditions.  They both agonize.  They both are torn between fear and duty.  Both are tempted to run, and know how guilty they would feel, for the rest of their lives, if they ran.  Both feel the call to save the children.  And finally, in the end, John-1 runs away, and John-2 runs in and grabs a toddler, getting out moments before the flames consume the entranceway. This, it seems to me, is the very essence of moral responsibility - in the one case, for a cowardly choice; in the other case, for a heroic one.  And I don't see what difference it makes, if John's decision was physically deterministic given his initial conditions, or if John's decision was preplanned by some alien creator that built him out of carbon atoms, or even if -
8693e0a0-44ca-4263-b5f2-e783ff21ac5d
trentmkelly/LessWrong-43k
LessWrong
99% shorter I listen to podcasts while doing chores or at the gym, and often feel like I'm learning something but then can't really remember anything afterwards. So for the past ~month I've been doing an experiment where I write brief summaries of them afterwards, usually same-day but sometimes a bit later. Generally I avoid all forms of fact checking, both "what did the episode say" and "what is actually true", though I don't stop myself if I feel like doing it. I've been posting them to my shortform on LessWrong. Mostly in reply to a single comment for tidiness, but two of them I accidentally posted to the top of the thread and one as a reply to another. Initially that was just because I wanted to be able to write them from both my work and personal laptops. (This blog is published through git. I don't have the repo on my work laptop, and probably shouldn't do.) But I kind of like it. A downside is it's slightly less convenient to include episode numbers or air dates or even titles unless I remember them. So I might be less consistent about that, though it feels good to have a fairly straightforward way to look up the episode given my summary. I've skipped all the fiction podcasts I listen to, because that's not why I listen to fiction. Also most interviews, those seem generally hard to summarize, though it would probably be reasonable to extract a few individual points. And one Planet Money episode seemed like it would be irresponsible to summarize carelessly, and I didn't feel like trying to be careful about it. But I've summarized every episode of 99% Invisible and History of English, and all but that one episode of Planet Money, that I've listened to in this time. Also one episode of Rationally Speaking and one of Corecursive. I'm not really sure what this experiment was trying to test. I was curious, so I did it. Some things I've maybe vaguely learned: first, I think I can generally do a decent summary. I frequently get distracted for a few seconds at a time, but ther
45048a46-9f84-49a5-9312-fdaf9a6b16a6
trentmkelly/LessWrong-43k
LessWrong
Sunday 8th 2pm PT: Scott Garrabrant's Office Hours / Cartesian Frames Sunday Nov 8th at 2pm, Scott Garrabrant will be in his Walled Garden office at 2pm to host discussion and questions about Cartesian Frames. There won't be a formal lecture, but stop by to chat, ask questions or explore related ideas. You can find him in the University section of the Garden, also accessible via the Tardis in the central area. http://garden.lesswrong.com?code=PZbv&event=scott-garrabrant-office-hours You can read more about Cartesian Frames here: https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT His office looks like this:
dec5d399-3dbd-4529-8aed-fc5e6c021cca
trentmkelly/LessWrong-43k
LessWrong
LINK: Google research chief: 'Emergent artificial intelligence? Hogwash!' The Register talks to Google's Alfred Spector: Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s, Google has instead taken a modular approach. "We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent." Spector calls this his "combination hypothesis", and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity. (Emphasis mine.) I don't have a transcript, but there are videos online. Spector is clearly smart, and apparently he expects an AI to appear in a completely different way than Eliezer does. And he has all the resources and financing he wants, probably 3-4 orders of magnitude over MIRI's. His approach, if workable, also appears safe: it requires human feedback in the loop. What do you guys think?  
f8201c72-0a80-4ad7-994b-f393711f31e1
trentmkelly/LessWrong-43k
LessWrong
[Link] "Improper Nouns" by siderea The post: * siderea | Improper Nouns [p/a/s, Patreon] Interestingly, 'improper nouns' – as described by the post – is itself an example of the phenomena it describes (as 'improper noun' is itself an existing term): > There's a thing you probably have encountered and maybe haven't really registered as it went by. It usually plays out like this. Somebody else takes a noun phrase – usually there's an adjective or two involved, sometimes multiple nouns – that seems perfectly ordinary and makes sense on the face of it, and which refers to something in the real world, and the speaker makes some bold assertion about it that strikes you as really weird, and possibly highly prejudicial. What they've said leaves you thinking, "wait, how could that possibly be true?" > > That's all you get. That's all the sign of what's happening. That's the glitch in the matrix. > > There's a lot of reasons that people make ridiculous claims about all sorts of things, and you will always be tempted to write it off as just ridiculous people saying ridiculous things and thinking no more about it. > > But if you can catch yourself in that moment, and not reflexively dismiss it as silliness, you could, instead, ask yourself: "Hey, is that noun phrase maybe a technical term of art which means something much more specific and possibly quite different from what the individual words literally mean?" > > Doing this? Is a super power. Lots of people can recognize specific technical terms that they are acquainted with. Few people cultivate the more general ability to recognize – or just suspect – that a noun phrase is a technical term they aren't familiar with. This is a useful enough idea that it's already been referenced in another post here.
550a2496-aaf2-4286-878e-df38e9a17e47
trentmkelly/LessWrong-43k
LessWrong
The underappreciated value of original thinking below the frontier When is it more valuable to do original thinking? Probably the most common assumption/implied advice here is that it's pointless to try to come up with anything original until you've consumed the entire field and can work on the frontier of knowledge. I meet this assumption sometimes when I try to do original thinking (I have no credentials), and I see it in other shy but otherwise really bright community members when they decide to hold back from thinking for themselves. This is terrible advice. First, it's a myth that the global frontier of knowledge is the only frontier that matters. Different communities have different frontiers, and if you want to do valuable knowledge-work you should aim to push the frontiers that matter. This can be done either by importing knowledge or by thinking originally, and the latter strategy is sometimes more cost-effective (especially in areas with immense research debt). Model the value of your knowledge-work based on who you think it'll reach. Second, the value of original thinking increases based on how unusual the values you're optimising for are. Imported knowledge, when it exists, is often optimised for something else. A great example here is Holden Karnofsky's History of empowerment and wellbeing. There are thousands of professional historians with more field knowledge than Holden, but because Holden is optimising for something different, he's still able to produce valuable original knowledge-work for our community. This is broadly true of academia where the name of the game is to accumulate prestige. Prestige often involves optimising for being prolific, how difficult the work appears, number of citations, working on fashionable topics, using fashionable methods/concepts, and more pointless things. These incentives are stuck in an inadequate equilibria enforced by journals: you get prestige by publishing to journals and by citing research that's been published there. Unilaterally trying to do something different within t
cb1d12a6-bfd4-4df9-aab0-adc038f6d86f
trentmkelly/LessWrong-43k
LessWrong
Framing Practicum: Bistability This is a framing practicum post. We’ll talk about what bistability is, how to recognize bistability in the wild, and what questions to ask when you find it. Then, we’ll have a challenge to apply the idea. Today’s challenge: come up with 3 examples of bistability which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).  Expected time: ~15-30 minutes at most, including the Bonus Exercise. What’s Bistability? The classic picture of bistability is a marble in a double bowl: The marble has one stable equilibrium on the left, and another on the right. Although the marble has a whole continuum of possible positions, when left to its own devices for a while it will settle down to one of just two positions. My own head-canonical examples of bistability come from digital electronics. One is the signal buffer: it turns a sorta-low voltage (like 1 V) and into an unambiguously low voltage (like 0.01 V), or a sorta-high voltage (like 4 V) into an unambiguously high voltage (like 4.99 V). One stable equilibrium is at 5 V, the other is at 0 V, and all other voltages get pushed toward one of those two. This is crucial to building large digital circuits: without buffering, 5 V would decay to 4 V then 3 V as we pass through one gate after another, and eventually we wouldn’t be able to tell whether a voltage is supposed to be high or low. Another electronic example is the latch, one of the standard low-level memory elements in digital circuits. You can think of a latch sort of like the marble-in-a-double-bowl, but with two extra features: * The state of the marble can be read out. One “bowl” represents “0”, and the other “1”. * An input signal can switch the “marble” from one state to the other. So, to “write” a bit into the memory element, we push the system into the desired “bowl” (i.e. basin of attraction). It then stays in that state indefinitely, and we can read out the stored bit as ma
58b51750-0246-4b2b-88c0-4b04a157f134
trentmkelly/LessWrong-43k
LessWrong
Confessions of a Transhumanist: On Life in Known Space This is a post about my personal history with transhumanism and world saving.
de6c2a9d-b86c-4e18-a17e-fc083d4b7858
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup 02-08-2012 Discussion article for the meetup : West LA Meetup 02-08-2012 WHEN: 08 February 2012 07:00:00PM (-0800) WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064 When: 7:00pm - 9:00pm Wednesday, February 8th. Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. Parking is free for 3 hours. Discussion Activity: Cognitive Biases Round-robin. Choose at least one item from Wikipedia's List of Cognitive Biases, and read up on it enough to introduce the topic to the group. Then, as a group, we will try to identify real-life situations where the bias would come into play, how it could be harmful, when it doesn't matter (or even helps!), and how to counter or use it. This is also a good time to browse recent posts, especially those that pertain to our activity! Don't worry if you don't have time to read anything, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed. I will bring a whiteboard with Bayes' Theorem written on it. Discussion article for the meetup : West LA Meetup 02-08-2012
1004e9f0-edb8-48b7-9619-1f453110fba6
trentmkelly/LessWrong-43k
LessWrong
PIBBSS Fellowship 2025: Bounties and Cooperative AI Track Announcement We're excited to announce that the PIBBSS Fellowship 2025 now includes a dedicated Cooperative AI track, supporting research that advances our understanding of cooperation in artificial intelligence systems. We are also announcing 300 USD bounties for each referral that becomes a Fellow. Read below for details. What is the Cooperative AI Track? Thanks to the support from the Cooperative AI Foundation, PIBBSS is launching a thematic track within its usual fellowship. This program focuses on research aimed at improving the cooperative intelligence of advanced AI systems. We'll select up to 6 fellows to work on projects that bridge their expertise in cooperation, collective behavior, and multi-agent systems with AI safety research. The fellows will do everything together with the general cohort, with the added research focus on cooperative AI. Research Areas We're particularly interested in are those covered in the Grants page of Cooperative AI Foundation, but we highlight: * Understanding and evaluating cooperation-relevant capabilities * Understanding cooperation-relevant propensities * Multi-agent interactions and emergent behavior * Information asymmetries and transparency * Fundamental research advancing our understanding of cooperation in complex systems Who Should Apply? As is the case with the general track of the Fellowship, we welcome researchers from any background. This track would however likely be a better fit for researchers from fields studying cooperation and collective behavior, including but not limited to: * Game theory and multi-agent systems * Behavioral economics * Network science * Political science (especially coalition formation/cooperation) * Organizational psychology * Anthropology of cooperation * Collective behavior in biological systems Fellowship Details (same as the general track, see here) * Duration: 3 months (June-September 2025) * Location: [Berkeley or London TBC] * Stipend: $3,000/month plus $1,000/month hou
ea880c49-9f1e-485c-a520-718b9e5409a7
trentmkelly/LessWrong-43k
LessWrong
Toronto Meetup
952b39ca-fe06-4778-a776-347daa92e955
trentmkelly/LessWrong-43k
LessWrong
Models of Value of Learning I generally subscribe to the signalling model of higher education: “education” is mostly about signalling how smart and diligent you are, rather than actually learning useful skills. Under this model, higher education doesn’t produce new human capital, it just makes it more obvious who has it already - it’s essentially marketing. That said, it still seems like actually learning things does provide at least some value for at least some people - and personally, I think I’ve gained a lot more value than most out of learning things. A lot of this comes from being strategic: choosing what to learn, and how much time to spend on it, in order to maximize value. In order to do that, we need some idea of the mechanism by which learning things can provide value in the first place. With that in mind, here are five different models for how things we formally learn (i.e. in school/university) can provide value. Skills A CS student learns a few programming languages, basic data structures, some aspects of low-level computer architecture and systems design… skills which are directly useful for real-world software development. Obviously not everything one learns in a CS degree is useful in real-world development, but there is a lot of directly-useful knowledge and skills in there. General model: formal learning provides practical value by teaching how to use a useful tool or perform a useful task. Some examples: * Generic communication skills like legal/scientific/business writing or public speaking * Useful ways to frame problems, like thinking about risk and discounted expected value of potential business projects * When and how to use a physical tool, like a pipette or atomic force microscope * Mathematical/algorithmic tools, like techniques for solving ODEs/PDEs, statistical tests, or data structures * When and how to apply mathematical/algorithmic tools to real-world problems, e.g. turning a physical system into equations in physics or engineering * Most vocational
5c094cc7-7431-4f0c-a057-e3648186618c
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bratislava Meetup IV. Discussion article for the meetup : Bratislava Meetup IV. WHEN: 24 June 2013 06:00:00PM (+0200) WHERE: Malewill Cafe, Uršulínska 9, Bratislava We will see each other at Malewill café. Because of many new members, this time the meetup will be unstructured -- the goal is to meet each other, and talk about us, our goals and expectations from our community. Any questions? Call: +421-908-158110 (Viliam) Stretneme sa v pondelok o šiestej v Malewill café na rohu Primaciálneho námestia, v zadnej nefajčiarskej miestnosti. Keďže minule prišlo veľa nových ľudí a mali sme nabitý program, tentokrát nie je formálny program žiaden, cieľom je zoznámiť sa, porozprávať sa o svojich plánoch, a pouvažovať, čo môžeme spolu dosiahnuť. Discussion article for the meetup : Bratislava Meetup IV.
8b1c577e-4cd0-4a59-9ed9-3bf6bde1cd01
trentmkelly/LessWrong-43k
LessWrong
Forecast With GiveWell Update: The Prize Pool has increased to $3,500!  Thank you to donors Anton Makiievskyi, Ryan Kidd, @Austin, and @Saul Munn, who contributed via Manifund. A total prize pool of $3,500 will be awarded for comments on eligible questions made before January 1, 2025. Get started here. * Ten prizes will be awarded as follows: * 1st place: $700 * 2nd place: $600 * 3rd place: $500 * 4th place: $400 * 5th place: $300 * 6th place through 10th place: $200 each Announcement Post GiveWell’s mission is to find the charities that deliver the greatest impact per dollar spent to help donors decide where to give. To support this, they publish forecasts on expected grant outcomes. Now, Metaculus and GiveWell invite you to forecast and share your reasoning to help GiveWell help others.  Get started and provide feedback on GiveWell's funding decisions. $1,000 commenting prize pool. Deadline is December 1st, 2024.  * How much will GiveWell update its cost-effectiveness estimate of GiveDirectly based on the results of the 2023-2024 follow-up of the Cash Transfers Study by UC Berkeley? * Will the study in Ghana find that RTS,S combined with malaria chemoprevention reduces clinical malaria more than RTS,S alone in the following age groups? * Will the study of supplying eyeglasses to near-vision impaired adults in Kenya and India find a boost to household consumption of the following levels? * Will the World Health Organization prequalify moxidectin before 2027? * Will GiveWell recommend grants of more than $5 million to support delivering chlorine and/or oral rehydration solution and zinc along with vaccine outreach services? * Will the study of oral rehydration solution and zinc distribution in Sierra Leone find a statistically significant increase in ORS/zinc usage by the recipients? * Will the study of chlorine distribution in Sierra Leone find a statistically significant increase in chlorine usage by the recipients? * How many chlorination grants will Gi
17b74bd3-8d2b-49c0-afda-1c6ebacb6b48
trentmkelly/LessWrong-43k
LessWrong
Which are the useful areas of AI study?   I'm stuck wondering on a peculiar question lately - which are the useful areas of AI study? What got me thinking is the opinion occasionally stated (or implied) by Eliezer here that performing general AI research might likely have negative utility, due to indirectly facilitating a chance of unfriendly AI being developed. I've been chewing on the implications of this for quite a while, as acceptance of these arguments would require quite a change in my behavior.   Right now I'm about to start my CompSci PhD studies soon, and had initially planned to focus on unsupervised domain-specific knowledge extraction from the internet, as my current research background is mostly with narrow AI issues in computational linguistics, such as machine-learning, formation of concepts and semantics extraction. However, in the last year my expectations of singularity and existential risks of unfriendly AI have lead me to believe that focusing my efforts on Friendly AI concepts would be a more valuable choice; as a few years of studies in the area would increase the chance of me making some positive contribution later on.   What is your opinion? Do studies of general AI topics and research in the area carry a positive or negative utility ? What are the research topics that would be of use to Friendly AI, but still are narrow and shallow enough to make some measurable progress by a single individual/tiny team in the course of a few years of PhD thesis preparation? Are there specific research areas that should be better avoided until more progress has been made on Friendliness research ? 
bbd427ec-521c-425d-b0b7-9e0ef66184f6
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Inverse scaling can become U-shaped **Edit**: Here's a [great comment by Ethan Perez](https://www.alignmentforum.org/editPost?postId=LvKmjKMvozpdmiQhP&key=7f6ea8c7c618187a899886ef6cc6f8) that caveats this result, that I'd recommend reading for context. This is a paper by folks at Quoc Le's team at Google that examines the winning tasks from [Round 1 of Anthropic's Inverse Scaling Prize](https://www.alignmentforum.org/posts/iznohbCPFkeB9kAJL/inverse-scaling-prize-round-1-winners). They find that 3/4 of the winning tasks — which exhibited *negative* returns-to-scale when tested on LMs up to the scale of [Gopher](https://storage.googleapis.com/deepmind-media/research/language-research/Training%20Gopher.pdf) (280B) — *go back* to exhibiting *positive* returns-to-scale at even greater model sizes such as [PaLM](https://arxiv.org/abs/2204.02311) (540B). **The abstract in full:** > Although scaling language models improves performance on a range of tasks, there are apparently some scenarios where scaling hurts performance. For instance, the Inverse Scaling Prize Round 1 identified four ''inverse scaling'' tasks, for which performance gets worse for larger models. These tasks were evaluated on models of up to 280B parameters, trained up to 500 zettaFLOPs of compute.  >   > > This paper takes a closer look at these four tasks. We evaluate models of up to 540B parameters, trained on five times more compute than those evaluated in the Inverse Scaling Prize. With this increased range of model sizes and training compute, three out of the four tasks exhibit what we call ''U-shaped scaling'' -- performance decreases up to a certain model size, and then increases again up to the largest model evaluated. One hypothesis is that U-shaped scaling occurs when a task comprises a ''true task'' and a ''distractor task''. Medium-size models can do the distractor task, which hurts performance, while only large-enough models can ignore the distractor task and do the true task. The existence of U-shaped scaling implies that inverse scaling may not hold for larger models.  > > > Second, we evaluate the inverse scaling tasks using chain-of-thought (CoT) prompting, in addition to basic prompting without CoT. With CoT prompting, all four tasks show either U-shaped scaling or positive scaling, achieving perfect solve rates on two tasks and several sub-tasks. This suggests that the term "inverse scaling task" is under-specified -- a given task may be inverse scaling for one prompt but positive or U-shaped scaling for a different prompt. > > Key figure from the paper is below, showing results for LMs up to [PaLM 540B](https://arxiv.org/abs/2204.02311). Note that positive scaling resumes for 3/4 of the inverse scaling tasks at the 2.5e24 FLOPs datapoint, which indeed [corresponds](https://www.aitracker.org/) exactly to vanilla PaLM 540B.[[1]](#fn1cz6ferw6y3) ![](http://res.cloudinary.com/lesswrong-2-0/image/upload/v1668363068/mirroredImages/LvKmjKMvozpdmiQhP/kgja1ocifipllna4bsef.png)1. **[^](#fnref1cz6ferw6y3)**From Table 22 in the [PaLM paper](https://arxiv.org/pdf/2204.02311.pdf).
31efd2aa-d9ed-442e-8f66-ffa77486b7b5
trentmkelly/LessWrong-43k
LessWrong
AI Alignment Writing Day Roundup #2 Here are some of the posts from the AI Alignment Forum writing day. Due to the participants writing 34 posts in less than 24 hours (!), I'm re-airing them to let people have a proper chance to read (and comment) on them, in roughly chronological order. 1) Computational Model: Causal Diagrams with Symmetry by Johns Wentworth This post is about representing logic, mathematics, and functions with causal models. > For our purposes, the central idea of embedded agency is to take these black-box systems which we call “agents”, and break open the black boxes to see what’s going on inside. > Causal DAGs with symmetry are how we do this for Turing-computable functions in general. They show the actual cause-and-effect process which computes the result; conceptually they represent the computation rather than a black-box function. I'm new to a lot of this, but to me this seemed like a weird and surprising way to think about math (e.g. the notion that the input n is something that causes the next input n−1). Seems like a very interesting set of ideas to explore. 2) Towards a mechanistic understanding of corrigibility by Evan Hubinger This post builds off of Paul Christiano's post on Worst Case Guarantees. That post claims: > Even if we are very careful about how we deploy ML, we may reach the point where a small number of correlated failures could quickly become catastrophic... I think the long-term safety of ML systems requires being able to rule out this kind of behavior, which I’ll call unacceptable, even for inputs which are extremely rare on the input distribution. Paul then proposes a procedure built around adversarial search, where one part of the system searches for inputs that produce unacceptable outputs in the trained agent, and talks more about how one might build such a system. Evan's post tries to make progress on finding a good notion of acceptable behaviour from an ML system. Paul's post offers two conditions about the ease of choosing an acceptable act
f237593d-c734-4924-9758-e370eadaf852
trentmkelly/LessWrong-43k
LessWrong
Request for proposals for projects in AI alignment that work with deep learning systems As part of our work on reducing potential risks from advanced artificial intelligence, Open Philanthropy is seeking proposals for projects working with deep learning systems that could help us understand and make progress on AI alignment: the problem of creating AI systems more capable than their designers that robustly try to do what their designers intended. We are interested in proposals that fit within certain research directions, described below and given as posts in the rest of this sequence, that we think could contribute to reducing the risks we are most concerned about. Anyone is eligible to apply, including those working in academia, industry, or independently. Applicants are invited to submit proposals for up to $1M in total funding covering up to 2 years. We may invite grantees who do outstanding work to apply for larger and longer grants in the future. Proposals are due January 10, 2022. Submit a proposal here. If you have any questions, please contact ai-alignment-rfp@openphilanthropy.org. Our view of alignment risks from advanced artificial intelligence This section was written by Nick Beckstead and Asya Bergal, and may not be representative of the views of Open Philanthropy as a whole. We think the research directions below would be pursued more fruitfully by researchers who understand our background views about alignment risks from advanced AI systems, and who understand why we think these research directions could help mitigate these risks.  In brief: * We believe it is plausible that later this century, advanced AI systems will do the vast majority of productive labor more cheaply than human workers can. * We are worried about scenarios where AI systems more capable than humans acquire undesirable objectives that make them pursue and maintain power in unintended ways, causing humans to lose most or all influence over the future. * We think it may be technically challenging to create powerful systems that we are highly certain have desi
4ed4360d-5674-40ac-9525-18ffcf570c62
trentmkelly/LessWrong-43k
LessWrong
The unreasonable effectiveness of plasmid sequencing as a service Note: thank you Mark Budde, cofounder and CEO of Plasmidsaurus, and Maria Konovalova, a growth/marketing/talented-person at Plasmidsaurus, for talking to me for this article! Also thank you to Eryney Marrogi, who helped answer some of my dumb questions about plasmids. Introduction Here’s some important context for this essay: it really, really sucks to start a company in biology. Despite billions in funding, the brightest minds the world has to offer, and clear market need, creating an enduring company here feels almost impossible. Some of this has to do with the difficulties of engaging with the world of atoms, some of it has to do with the modern state of enormously expensive clinical trials, and some of it still can be blamed on something else. To some degree, this is an unavoidable facet of this field; working in it means you’re here for the ‘love of the game’ than anything else. But is it necessarily fair to equate all for-profit life-science endeavors with grueling, decade-long struggles to bypass scientific obstacles? Is there a world in which life-sciences startups can have a more traditional tech culture ethos in how they approach things? Unfortunately, probably not for a startup aiming to do the traditional therapeutics play. But if we broaden our scope to companies to include in service provider biotechs, I can offer at least one example: Plasmidsaurus. Plasmidsaurus was started in 2021 and is currently run by Mark Budde. Some historical context: Mark was the founder of a separate, but related company called Primordium Labs, which merged with another separate company called SNPsaurus. They both were largely doing the same thing, so, circa 2022, they agreed to combine underneath the Plasmidsaurus name. People who currently work on the wet-lab side of biotech have likely not only heard of this company, but are also loyal customers. On the flip side, I would guess that not even computational folks at biotech companies have heard of them, much less any
14b41a5d-c4ba-4913-9e4a-67c91c3ab4fe
trentmkelly/LessWrong-43k
LessWrong
Practical Advice Backed By Deep Theories Once upon a time, Seth Roberts took a European vacation and found that he started losing weight while drinking unfamiliar-tasting caloric fruit juices. Now suppose Roberts had not known, and never did know, anything about metabolic set points or flavor-calorie associations—all this high-falutin' scientific experimental research that had been done on rats and occasionally humans. He would have posted to his blog, "Gosh, everyone!  You should try these amazing fruit juices that are making me lose weight!"  And that would have been the end of it.  Some people would have tried it, it would have worked temporarily for some of them (until the flavor-calorie association kicked in) and there never would have been a Shangri-La Diet per se. The existing Shangri-La Diet is visibly incomplete—for some people, like me, it doesn't seem to work, and there is no apparent reason for this or any logic permitting it.  But the reason why as many people have benefited as they have—the reason why there was more than just one more blog post describing a trick that seemed to work for one person and didn't work for anyone else—is that Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist. One of the pieces of advice on OB/LW that was frequently cited as the most important thing learned, was the idea of "the bottom line"—that once a conclusion is written in your mind, it is already true or already false, already wise or already stupid, and no amount of later argument can change that except by changing the conclusion.  And this ties directly into another oft-cited most important thing, which is the idea of "engines of cognition", minds as mapping engines that require evidence as fuel. If I had merely written one more blog post that said, "You know, you really should be more open to changing your mind—it's pretty important—and oh yes, you should pay attention to the evidence too."  And this would not have been as
5fdf36f7-3e72-4a9f-9258-1c30910916f2
trentmkelly/LessWrong-43k
LessWrong
The Argument from Crisis and Pessimism Bias Many people have argued that the public seems to have an overly negative view of society's development. For instance, this survey shows that the British public think that the crime rate has gone up, even though it has gone down. Similarly, Hans Rosling points out that the public has an overly negative view of developing world progress. If we have such a pessimism bias, what might explain it? One standard explanation is that good news isn't news - only bad news is. A murder or a famine is news; their absense isn't. Hence people listening to the news gets a skewed picture of the world. No doubt there is something to that. In this post I want, however, to point to another mechanism that gives rise to a pessimism bias, namely the compound effect of many uses of what I call the Argument from Crisis. (Please notify me if you've seen this idea somewhere else.) The Argument from Crisis says that some social problem - say crime, poverty, inequality, etc - has worsened and that we therefore need to do something about it. This way of arguing is effective primarily because we are loss averse - because we think losing is worse than failing to win. By arguing that inequality was not as bad ten years ago and that we have now "lost" some degree of equality, your argument will be rhetorically stronger. The reason is that in that case more equality will eradicate a loss, whereas if inequality hasn't worsened, more equality will simply be a gain, which we value less. Hence we will be more inclined to act against inequality in the former case. Even though the distinction between a gain and an eradication of a loss is important from a rhetorical point of view, it does not seem very relevant from a logical point of view. Whatever the level of crime or inequality is, it would seem that the value of reducing it is the same regardless of whether it has gone up or down the past ten years. Another reason for why the Argument from Crisis is rhetorically effective is of course that we beli
673622c8-97eb-4a82-b83b-e789fc86d210
trentmkelly/LessWrong-43k
LessWrong
Moral differences in mediocristan Scott Alexander writes: > Utilitarianism agrees that we should give to charity and shouldn’t steal from the poor, because Utility, but take it far enough to the tails and we should tile the universe with rats on heroin. Religious morality agrees that we should give to charity and shouldn’t steal from the poor, because God, but take it far enough to the tails and we should spend all our time in giant cubes made of semiprecious stones singing songs of praise. He suggests that these are surprisingly divergent visions of the highest good, for moral visions that give similar advice for day-to-day life: > converting the mass of the universe into nervous tissue experiencing euphoria isn’t just the second-best outcome from a religious perspective, it’s completely abominable But what strikes me about them is how similar they seem, when you strip away the decorative metaphors. First of all, in both cases you can afford many more instances of the best thing if you simulate it than make a real one. So in both cases the universe is better converted into computronium than actual rats or heavenly choirs - the substitution of "nervous tissue experiencing euphoria" for "rats on heroin" is an implied acknowledgement of this. Nor should we imagine that religion eschews such optimization. Many religions promote asceticism, which allows more humans to subsist and praise God on the same resources. Moreover, the Bible urges that we be fruitful and multiply. But also, it's not at all clear that an imagined end state of constant songs of praise is meaningfully different from the hedonic-utilitarian state. After all, of what interest is a song of praise if it is not an expression of genuine appreciation? And, couldn't you - by economizing on the actual churches, or all the parts of the mind not engaged in songs of praise - make your religious heaven more efficient and therefore have more of it? (Anyone familiar with Dante's Paradiso will have recognized a vision of heaven - where we w
0eaceeee-eaba-4976-97c7-27a00bb9ea8f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Building brain-inspired AGI is infinitely easier than understanding the brain *Epistemic status: Trying to explain why I have certain intuitions. Not sure whether people will find this obvious vs controversial.* Part 1: Brains probably do some useful things in utterly inscrutable ways ========================================================================= I'm not so much interested in arguing the strong claim that the brain does some useful things in *infinitely* inscrutable ways—i.e., that understanding them is fundamentally impossible. I merely want to make the weaker claim that the brain probably does some useful things in ways that are *for all intents and purposes* inscrutable. Where did I get this intuition? A few places: * Evolved FPGA circuits - see the awesome blog post [On the Origin of Circuits](https://www.damninteresting.com/on-the-origin-of-circuits/) focusing on the classic [1996 paper by Adrian Thompson](https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9691). An evolved circuit of just 37 logic gates managed to perform a function which kinda seems impossible with those components. It turned out that the components were used in weird ways—the circuit ran differently on nominally-identical FPGAs, the transistors were not all used as on/off switches, there was some electromagnetic coupling or power-line coupling going on, etc. Can we understand how this circuit works? In the paper, they didn't try. I imagine that a good physicist, given enough time and experimental data, could get at least a vague idea of the most important aspects. But there might be subtleties that can't really be explained better than a simulation, or maybe some component has 17 unrelated functions that occur at different parts of the cycle, or maybe you need to account for a microscopic bump in some wire, or whatever. If it were 370 components instead of 37, and there were limits on what you can measure experimentally, it would be that much harder. * The [Busy Beaver Function](https://en.wikipedia.org/wiki/Busy_beaver) Σ(n) is unknown for as low as n=5. So we have a bunch of really simple computer programs, and no one knows whether they run forever or halt. When you get to larger n it gets even worse: For n≥1919 (and perhaps much smaller n too), Σ(n) is formally undecidable. While [that's not exactly the same as saying that we will never understand these programs](https://math.stackexchange.com/questions/261657/g%c3%b6dels-incompleteness-theorem-meta-reasoning-loophole), I kinda expect that there *are* in fact programs whose asymptotic behavior really is "infinitely inscrutable", i.e. programs which don't halt, but where there is fundamentally no way to understand why they don't halt, short of actually running them forever, and that's true even if you have a brain the size of Jupiter. (I could be wrong, and this is not an important part of my argument.) * Riemann hypothesis: We have a simple-to-define function that exhibits an obvious pattern of behavior. Like those busy beaver Turing machines, the answer to "why" is "I dunno, we ran the calculation, and that's what we've found, at least so far". In this case, I assume that an explanation probably exists, but I find it interesting that we haven't discovered it yet, after 150 years of intense effort. In summary, my intuition is that: 1. Simple components can give rise to recognizable emergent patterns of behavior for inscrutably complicated reasons that can't necessarily be distilled down to any "explanation" beyond "we simulated it and that's what happens", and 2. Neurons are *not* simple components, in that even if they have a legible primary input-output function, they probably have dozens of "side-channel" input-output functions that probably get sporadically used by evolution as well. (If you tug on a dendrite, then it's a spring!)[[1]](#fno4vez7gut8) These two considerations coalesce to give me a prior expectation that there may be large numbers of *very* deep rabbit holes when you try to work out low-level implementation details of how the brain does any particular thing. The brain might do that thing by a beautiful, elegant, simple design ... or it might do that thing in some bizarre, ridiculous way, which we will not understand except by looking in weird places, like measuring mechanical stresses on cell membranes, or by measuring flows of chemicals that by all accounts ought to have no relation whatsoever to neuron firing, or by simulating systems of 492 components which interact in a complicated way that can't really be boiled down into anything simpler. The book *The Idea of the Brain* has some great examples of the horrors facing neuroscientists trying to understand seemingly-simple neural circuits: > …Despite having a clearly established connectome of the thirty-odd neurons involved in what is called the crustacean stomatogastric ganglion, Marder's group cannot yet fully explain how even some small portions of this system function. ...in 1980 the neuroscientist Allen Selverston published a much-discussed think piece entitled "Are Central Pattern Generators Understandable?"...the situation has merely become more complex in the last four decades...The same neuron in different [individuals] can also show very different patterns of activity—the characteristics of each neuron can be highly plastic, as the cell changes its composition and function over time... > > …Decades of work on the connectome of the few dozen neurons that form the central pattern generator in the lobster stomatogastric system, using electrophysiology, cell biology and extensive computer modelling, have still not fully revealed how its limited functions emerge. > > Even the function of circuits like [frog] bug-detecting retinal cells—a simple, well-understood set of neurons with an apparently intuitive function—is not fully understood at a computational level. There are two competing models that explain what the cells are doing and how they are interconnected (one is based on a weevil, the other on a rabbit); their supporters have been thrashing it out for over half a century, and the issue is still unresolved. In 2017 the connectome of a neural substrate for detecting motion in *Drosophila* was reported, including information about which synapses were excitatory and which were inhibitory. Even this did not resolve the issue of which of those two models is correct. > > I haven't chased down these references, and can't verify that understanding these things is really as difficult as this author says. On the other hand, these are *really really simple systems*; if they're even remotely approaching the limits of our capabilities, imagine an interacting bundle of 10× or 100× more neurons, doing something more complicated, in a way that is harder to experimentally measure. So anyway, maybe scientists will eventually understand how the brain does absolutely everything it does, at the “implementation level”. I don't think that's ruled out. But I sure don't think it's likely, even for the simplest worm nervous system, in the foreseeable future. Part 2: …But that doesn't mean brain-inspired AGI is hard! ========================================================== **Side note 1:** I use "brain-inspired AGI" in the sense of copying (or reinventing) high-level data structures and algorithms, not in the sense of copying low-level implementation details, e.g. neurons that spike. "Neuromorphic hardware" is a thing, but I see no sign that neuromorphic hardware will be relevant for AGI. Most neuromorphic hardware researchers are focused on low-power sensors, as far as I understand. **Side note 2:** The claim “brain-inspired AGI is likely” is unrelated to the claim “brain-inspired AGI will bring about a better future for humankind than other types of AGIs”, although these two claims sometimes get intuitively bundled together under the heading of "cheerleading for brain-like AGI". I have grown increasingly sympathetic to the former claim, but am undecided about the latter claim, and see it as an open research question—indeed, a *particularly urgent* open question, as it informs [high-leverage research prioritization decisions that we can act on immediately](https://www.alignmentforum.org/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why). **OK, back to the main text.** I want to argue something like this: ***If some circuit in the brain is doing something useful, then it's humanly feasible to understand what that thing is and why it's useful, and to write our own CPU code that does the same useful thing.*** In other words, the brain's implementation of that thing can be super-complicated, but the input-output relation cannot be that complicated—at least, the *useful* part of the input-output relation cannot be that complicated. The crustacean stomatogastric ganglion central pattern generators discussed above are a great example: their *mechanisms* are horrifically complicated, but their *function* is simple: they create a rhythmic oscillation. Hey, you need a rhythmic oscillation in your AGI? No problem! I can do that in one line of Python. At the end of the day, we survive by exploiting regularities in our ecological niche and environment. If the brain does something that's useful, I feel like there has to be a legible explanation in those terms; and from that, that there has to be legible CPU code that does the same thing. I feel most strongly about the boldface statement above in regards to the neocortex. The neocortex is a big [uniform-ish](https://www.lesswrong.com/posts/NkSpukDkm9pjRdMdB/human-instincts-symbol-grounding-and-the-blank-slate) machine that learns patterns in inputs and outputs and rewards, builds a predictive model, and uses that model to choose outputs that increase rewards, using some techniques we already understand, and others we don’t. If the neocortex does some information-processing thing, and the result is that it does its job better, then I feel like there has to be some legible explanation for what it's doing, why, and how, in terms of that primary prediction-and-action task … there has to be some reason that it systematically helps run smarter searches, or generates better models, or makes more accurate predictions, etc. I feel much *less* strongly about that above boldface statement in regards to the brainstem and hypothalamus (the home of evolved instinctive responses to different situations, I would argue, see [here](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and)). For example, I can definitely imagine that the human brain has an instinctual response to a certain input which is adaptive in 500 different scenarios that ancestral humans typically encountered, and maladaptive in another 499 scenarios that ancestral humans typically encountered. So on average it's beneficial, and our brains evolved to have that instinct, but there's no tidy story about why that instinct is there and no simple specification for exactly what calculation it's doing. By the same token, in this sense, I expect that understanding the key operating principles of human intelligence will be dramatically easier than understanding the key operating principles of the nervous system of a 100-neuron microscopic worm!! Weird thought, right?! But again, every little aspect of those worm neurons could be a random side-effect of something else, or it could be an adaptive strategy for some situation that comes up in the worm's environment once every 5 generations, and how on earth are you ever going to figure out which is which?? And if you can't figure out which is which, how can you hope to “understand” the system in any way besides running a molecule-by-molecule simulation?? By contrast, “human intelligence” is a specific suite of capabilities, including things like “can carry on conversations, invent new technology, etc.”—a known target to aim for. (*Added for clarification:* The point of the previous paragraph is that “understanding how a nervous system gives rise to a particular identifiable set of behaviors” is tractable, whereas “understanding the entire design spec of a nervous system”—i.e., every way that it optimizes inclusive genetic fitness—is *not* tractable. And I'm saying that this is *such* a big factor that it outweighs even the many-orders-of-magnitude difference in complexity between microscopic worms' and humans' nervous systems.) Conclusions =========== I guess I have a not-terribly-justified gut feeling that we *already* vaguely understand how neocortical algorithms work to create human intelligence, and that “soon” (few decades?) this vague understanding will develop into full-fledged AGIs, assuming that the associated R&D continues. On the other hand, I acknowledge that this is very much *not* a common view, including among people far more knowledgeable than myself, and in particular there are plenty of neuroscientists who view the project of understanding the human brain as a centuries-long endeavor. I guess this post is a little piece of how I reconcile those two facts: At least in some cases, when neuroscientists talk about understanding the brain, I think they mean understanding what all the calculations are and how they are implemented—like what those researchers have been trying and failing to do with the crustacean stomatogastric ganglion in that book quote from part 1 above—but for a human brain with 10⁹× more neurons. Yup, that sounds like a centuries-long endeavor to me too! But I think understanding human intelligence well enough to make a working AGI algorithm is *dramatically* easier than that. (*Update*: See further discussion in my later post series, Sections [2.8](https://www.lesswrong.com/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in#2_8_Timelines_to_brain_like_AGI_part_1_3__how_hard_will_it_be_to_reverse_engineer_the_learning_from_scratch_parts_of_the_brain__well_enough_for_AGI_), [3.7](https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and#3_7_Timelines_to_brain_like_AGI_part_2_of_3__how_hard_will_it_be_to_reverse_engineer_the_Steering_Subsystem_well_enough_for_AGI_), and [3.8](https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and#3_8_Timelines_to_brain_like_AGI_part_3_of_3__scaling__debugging__training__etc_).) …And I *do* think that latter type of work is actually getting done, particularly by those researchers who go in armed with an understanding of (1) what useful algorithms might look like in general, (2) neuroscience, and (3) psychology / behavior, and then go hunting for ways that those three ingredients might come together, without getting too bogged down in explaining every last neuron spike. 1. **[^](#fnrefo4vez7gut8)**Incidentally, this is also the lens through which I think about the arguments over whether or not glial cells (in addition to neurons) do computations. If glial cells are predictable systems that interact with neurons, *of course* they'll wind up getting entrained in computations! That's what evolution does, just like an evolved PCB circuit would probably use the board itself as a mechanical resonator or whatever other ridiculous things you can imagine. So my generic expectation is: (1) If you removed the glial cells, it would break lots of brain computations; (2) If there were no such thing as glial cells, a functionally-identical circuit would have evolved, and I bet it wouldn't even look all that different. By the way, I know almost nothing about glial cells, I'm just speculating. :-)
5772f57a-f8ae-422c-acd9-3e7ee53fcaa1
trentmkelly/LessWrong-43k
LessWrong
School Has It Backwards Life isn't about getting to the right answers; it's about discovering the right questions. Crossposted from Curious Human. ---------------------------------------- In school, we're constantly being asked questions and instantly rewarded when we find the right answers. We're praised in class for saying "25" when we're asked what "5x5" is. We're given perfect scores on tests when we answer every question correctly. We get good grades for repeatedly getting good scores. Students who give the right answers when asked are "good students". We're also instantly punished when we propose the wrong answers. We get scolded, lose points, and get poor grades. Students who don't give the right answers are "bad students". For 16+ years (12 grades + 4 years of higher education) we're literally conditioned, in artificial learning environments, to find the right answers to other people's questions. How often do students finish a paper assigned to them in a class and want to write more on the subject? Rarely, if ever. Because the questions - the essay prompts - were never truly ours in the first place. It's the same for every subject. Take the way math is taught in schools. General concepts are broken down into extremely specific questions. Students are then given a list of methods to answer each specific type of question, which they memorize for the sole purpose of using them to quickly find answers on tests. This process only makes sense for basic, fundamental concepts (times tables, finding the areas of common shapes, etc.); anything more complex and it quickly becomes unnecessarily complicated. Even worse, it detaches math from reality and ruins its purpose, which is to allow us to take first principles, generalize them, and apply them to as many real-world cases as possible. We weren't designed to learn this way. Have you ever seen a child learn something new? They're playing, and suddenly something sparks their interest. They come up with a question. You see their eyes li
71ac4111-682e-4bff-b29b-05cde016f59a
trentmkelly/LessWrong-43k
LessWrong
Against most, but not all, AI risk analogies I personally dislike most AI risk analogies that I've seen people use. While I think analogies can be helpful for explaining concepts to people and illustrating mental pictures, I think they are frequently misused, and often harmful. At the root of the problem is that analogies are consistently mistaken for, and often deliberately intended as arguments for particular AI risk positions. And a large fraction of the time[1] when analogies are used this way, I think they are misleading and imprecise, routinely conveying the false impression of a specific, credible model of AI, even when no such credible model exists. Here is a random list of examples of analogies that I found in the context of AI risk (note that I'm not saying these are bad in every context): 1. Stuart Russell: "It’s not exactly like inviting a superior alien species to come and be our slaves forever, but it’s sort of like that." 2. Rob Wiblin: "It’s a little bit like trying to understand how octopuses are going to think or how they’ll behave — except that octopuses don’t exist yet, and all we get to do is study their ancestors, the sea snail, and then we have to figure out from that what’s it like to be an octopus." 3. Eliezer Yudkowsky: "The character this AI plays is not the AI. The AI is an unseen actress who, for now, is playing this character. This potentially backfires if the AI gets smarter." 4. Nate Soares: "My guess for how AI progress goes is that at some point, some team gets an AI that starts generalizing sufficiently well, sufficiently far outside of its training distribution, that it can gain mastery of fields like physics, bioengineering, and psychology [...] And in the same stroke that its capabilities leap forward, its alignment properties are revealed to be shallow, and to fail to generalize. The central analogy here is that optimizing apes for inclusive genetic fitness (IGF) doesn't make the resulting humans optimize mentally for IGF." 5. Norbert Wiener: "when a machine constr
ad186004-aaf9-4322-bbb5-6eea1cf75b77
trentmkelly/LessWrong-43k
LessWrong
Absolutely Winning Bridge Hands In bridge the best you can do is commit to take all the cards and then follow through. Even better if you do this without the help of a trump suit. Playing today I was curious: what fraction of hands are "absolutely winning" for 7NT, in that they'll let you make this contract no matter what your partner or opponents happen to hold? Here's one example hand: * Clubs: A, K, Q * Diamonds: A, K, Q * Hearts: A, K, Q * Spades: A, K, Q, J And here's a very different example: * Clubs: A * Diamonds: A * Hearts: A * Spades: A, K, Q, J, 10, 9, 8, 7, 6, 5 Any absolutely winning 7NT hand needs to have all the aces, so that whatever your opponent leads you can take the first trick. Then all the other cards need to be "good", in that no other player can have a higher card in their suit. Since a card being "good" requires that you also have every higher card in the same suit, all that matters for each suit is its length. If I tell you that an absolutely winning hand has four spades, you know they are A, K, Q, J. What we're doing, then, is assigning positive lengths to suits, where all lengths must sum to 13. But it's simpler if we ignore the aces, since you have to have all four, and assign non-negative lengths that add to 9. How many ways can we do this? Since I'm more of a programmer than a mathematician, here's a way to solve this with code: print(len([ (c, d, h, s) for c in range(10) for d in range(10 - c) for h in range(10 - c - d) for s in range(10 - c - d - h)])) Which gives 715. [1] There are 52-choose-13 bridge hands (about 635B) so your chances of getting an absolutely winning 7NT hand is 715 in 635B or just a bit better than one in a billion. While very unlikely, this is in the "it could happen, maybe" range and not all the way to "no way, unless the shuffle was rigged". EDIT: the above is wrong in two ways. First, the number of spades is completely determined by the number of the other three suits, and so for s in range(10 - c - d - h
5d1e1d0c-05c6-440d-8dfc-47298f6d2416
trentmkelly/LessWrong-43k
LessWrong
State of the Solstice (NYC Event upcoming) (For the time being, I'm posting this in both Main and in Discussion, to increase visibility. If it ends up getting promoted I may remove this one if people think I should) > Every year people mock the Rationalist Solstice Ritual. And every year I read the discussions, nod, and then go to New York and have a great time and meet amazing people and have fascinating discussions and get the songs stuck in my head for the next couple of weeks. > > (“Stop enjoying yourself! Your enjoyment is wrong!”) > > I think the rationalist movement has been especially blessed. Without really meaning to, and without having to deliberately believe cringeworthy things for the heck of it or unfairly attack others, we have managed to develop enough different ways of thinking that we naturally have a very strong in-group distinction – which in turn means a very strong community. > > Less Wrong, More Rite, Part II - Slatestarcodex Yesterday, the Bay Area and Boston each held a Solstice event. This upcoming weekend, the Seattle community will be putting on an event, and in New York City, we'll have a huge, flagship event with professional music throughout. (Details at the end of the post. If you'd like to come meet other rationalists but aren't into singing, there'll be a concurrent party hosted downstairs, turning into an official after party around 9:30pm) The Bay Area folk were able to put together a streaming video. You can check it out here. It requires a little emotional investment to experience it through a computer monitor, but if you're willing to make that investment, I think it pays off well. I actually found myself singing along despite being alone in a room, and feeling like a part of the crowd. I also found myself clapping when several of the speakers delivered some excellent stories. (Your mileage may vary, of course.) Watching the Bay Area video was a unique experience for me - quite possibly the proudest moment of my life. For the first few years, Solstice felt like
e03cdc3e-7ee1-4e86-b032-25a55d392750
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Locality of goals Introduction ============ Studying goal-directedness produces two kinds of questions: questions about goals, and questions about being directed towards a goal. Most of my previous posts focused on the second kind; this one shifts to the first kind. Assume some goal-directed system with a known goal. The nature of this goal will influence which issues of safety the system might have. If the goal focuses on the input, the system might [wirehead](https://wiki.lesswrong.com/wiki/Wireheading) itself and/or [game its specification](https://www.lesswrong.com/posts/7b2RJJQ76hjZwarnj/specification-gaming-the-flip-side-of-ai-ingenuity). On the other hand, if the goal lies firmly in the environment, the system might have [convergent instrumental subgoals](https://wiki.lesswrong.com/wiki/Basic_AI_drives) and/or destroy [any unspecified value](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile). Locality aims at capturing this distinction. Intuitively, the locality of the system's goal captures how far away from the system one must look to check the accomplishment of the goal. Let's give some examples: * The goal of "My sensor reaches the number 23" is very local, probably maximally local. * The goal of "Maintain the temperature of the room at 23 °C" is less local, but still focused on a close neighborhood of the system. * The goal of "No death from cancer in the whole world" is even less local. Locality isn't about how the system extract a model of the world from its input, but about whether and how much it cares about the world beyond it. Starting points =============== This intuition about locality came from the collision of two different classification of goals: the first from from Daniel Dennett and the second from Evan Hubinger. Thermostats and Goals --------------------- In "The Intentional Stance", Dennett explains, extends and defends... the [intentional stance](https://en.wikipedia.org/wiki/Intentional_stance). One point he discusses is his liberalism: he is completely comfortable with admitting ridiculously simple systems like thermostats in the club of intentional systems -- to give them meaningful mental states about beliefs, desires and goals. Lest we readers feel insulted at the comparison, Dennett nonetheless admits that the goals of a thermostat differ from ours. > Going along with the gag, we might agree to grant [the thermostat] the capacity for about half a dozen different beliefs and fewer desires—it can believe the room is too cold or too hot, that the boiler is on or off, and that if it wants the room warmer it should turn on the boiler, and so forth. But surely this is imputing too much to the thermostat; it has no concept of heat or of a boiler, for instance. So suppose we de-interpret its beliefs and desires: it can believe the A is too F or G, and if it wants the A to be more F it should do K, and so forth. After all, by attaching the thermostatic control mechanism to different input and output devices, it could be made to regulate the amount of water in a tank, or the speed of a train, for instance. The goals and beliefs of a thermostat are thus not about heat and the room it is in, as our anthropomorphic bias might suggest, but about the binary state of its sensor. Now, if the thermostat had more information about the world -- a camera, GPS position, general reasoning ability to infer information about the actual temperature from all its inputs --, then Dennett argues its beliefs and goals would be much more related to heat in the room. > The more of this we add, the less amenable our device becomes to serving as the control structure of anything other than a room-temperature maintenance system. A more formal way of saying this is that the class of indistinguishably satisfactory models of the formal system embodied in its internal states gets smaller and smaller as we add such complexities; the more we add, the richer or more demanding or specific the semantics of the system, until eventually we reach systems for which a unique semantic interpretation is practically (but never in principle) dictated (cf. Hayes 1979). At that point we say this device (or animal or person) has beliefs about heat and about this very room, and so forth, not only because of the system's actual location in, and operations on, the world, but because we cannot imagine an-other niche in which it could be placed where it would work. Humans, Dennett argues, are more like this enhanced thermostat, in that our beliefs and goals intertwine with the state of the world. Or put differently, when the world around us changes, it will influence almost always influence our mental states; whereas a basic thermostat might react the exact same way in vastly different environments. > But as systems become perceptually richer and behaviorally more versatile, it becomes harder and harder to make substitutions in the actual links of the system to the world without changing the organization of the system itself. If you change its environment, it will notice, in effect, and make a change in its internal state in response. There comes to be a two-way constraint of growing specificity between the device and the environment. Fix the device in any one state and it demands a very specific environment in which to operate properly (you can no longer switch it easily from regulating temperature to regulating speed or anything else); but at the same time, if you do not fix the state it is in, but just plonk it down in a changed environment, its sensory attachments will be sensitive and discriminative enough to respond appropriately to the change, driving the system into a new state, in which it will operate effectively in the new environment. Part of this distinction between goals comes from generalization, a property considered necessary for goal-directedness since Rohin's [initial post](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma#Generalization_to_novel_circumstances) on the subject. But the two goals also differs in their "groundedness": the thermostat's goal lies completely in its sensors' inputs, whereas the goals of humans depend on things farther away, on the environment itself. That is, these two goals have different locality. Goals Across Cartesian Boundaries --------------------------------- The other classification of goals comes from Evan Hubinger, in a personal discussion. Assuming a [Cartesian Boundary](https://www.lesswrong.com/posts/JasCkaPtZEJsYDX8H/cartesian-boundary-as-abstraction-boundary) outlining the system and its inputs and outputs, goals can be functions of: * **The environment**. This includes most human goals, since we tend to refuse wireheading. Hence the goal depends on something else than our brain state. * **The input**. A typical goal as a function of the input is the one ascribed to the simple thermostat: maintaining the number given by its sensor above some threshold. If we look at the thermostat without assuming that its goal is a proxy for something else, then this system would happily wirehead itself, as the goal IS the input. * **The output**. This one is a bit weirder, but captures goals about actions: for example, the goal of twitching. If there is a robot that only twitches, not even trying to keep twitching, just twitching, its goal seems about its output only. * **The internals**. Lastly, goals can depend on what happens inside the system. For example, a very depressed person might have the goal of "Feeling good". If that is the only thing that matters, then it is a goal about their internal state, and nothing else. Of course, many goals are functions of multiple parts of this quatuor. Yet separating them allows a characterization of a given goal through their proportions. Going back to Dennett's example, the basic thermostat's goal is a function of its input, while human goals tend to be functions of the environment. And once again, an important aspect of the difference appears to lie in how far from the system is there information relevant to the goal -- locality. What Is Locality Anyway? ======================== Assuming some model of the world (possibly a causal DAG) containing the system, the locality of the goal is inversely proportional to the minimum radius of a ball, centered at the system, which suffice to evaluate the goal. Basically, one needs to look a certain distance away to check whether one’s goal is accomplished; locality is a measure of this distance. The more local a goal, the less grounded in the environment, and the most it is susceptible to wireheading or change of environment without change of internal state. Running with this attempt at formalization, a couple of interesting point follow: * If the model of the world includes time, then locality also captures how far in the future and in the past one must go to evaluate the goal. This is basically the short-sightedness of a goal, as exemplified by variants of twitching robots: the robot that simply twitches; the one that want to maximize its twitch in the next second; the one that want to maximize its twitching in the next 2 seconds,... up to the robot that want to maximize the time it twitches in the future. * Despite the previous point, locality differs from the short term/long term split. An example of a short-term goal (or one-shot goal) is wanting an ice cream: after its accomplishment, the goal simply dissolves. Whereas an example of a long-term goal (or continuous goal) is to bring about and maintaing world peace -- something that is never over, but instead constrains the shape of the whole future. Short-sightedness differs from short-term, as a short-sighted goal can be long-term: "for all times t (in hours to simplify), I need to eat an ice cream in the interval [t-4,t+4]". * Where we put the center of the ball inside the system is probably irrelevant, as the classes of locality should matter more than the exact distance. * An alternative definition would be to allow the center of the ball to be anywhere in the world, and make locality inversely proportional to the sum of the distance of the center to the system plus the radius. This captures goals that do not depend on the state of the system, but would give similar numbers than the initial definition. In summary, locality is a measure of the distance at which information about the world matters for a system's goal. It appears in various guises in different classification of goals, and underlies multiple safety issues. What I give is far from a formalization; it is instead a first exploration of the concept, with open directions to boot. Yet I believe that the concept can be put into more formal terms, and that such a measure of locality captures a fundamental aspect of goal-directedness. *Thanks to Victoria Krakovna, Evan Hubinger and Michele Campolo for discussions on this idea.*
bd07fff2-ab4e-43ec-a58d-d273c30b105b
trentmkelly/LessWrong-43k
LessWrong
Intake Rumble Spew Too much reading. Eyes unfocused and tired Too little writing. Fingers still and ready Libraries of words. Burst Across the fibre. Pulse Down the copper. Flux. Signal demuxed at the demarc Thus the Internet was routed The LAN shouts, packets whorl and twirl Dancing into my desktop's NIC 96 Firefox tabs later Where has my time gone? What did I learn? Why? Right...Write. Write!
6a96dfde-dc0a-4678-98c0-0ba34de43792
trentmkelly/LessWrong-43k
LessWrong
Not exactly the trolley problem An unusual incident. Are you obligated to be on the side of the plane with the crocodile if the other passengers are overbalancing the plane? To push other passengers over to the side with the crocodile?
f9bea024-6fc1-49c0-a4ee-b7d8d32a7d3a
trentmkelly/LessWrong-43k
LessWrong
[Link] Did AlphaStar just click faster? This is a linkpost for: https://medium.com/@aleksipietikinen/an-analysis-on-how-deepminds-starcraft-2-ai-s-superhuman-speed-could-be-a-band-aid-fix-for-the-1702fb8344d6. tl;dr: AlphaStar clicked at a rate of 1000+ Actions Per Minute for five second periods, and a rate 1500+ APM for fractions of a second. The fastest human player can't sustain anything above 500 APM for more than a second or two. Did AlphaStar just spam click its way to victory?
44ea7a3e-3281-4415-941d-63bc400e989f
trentmkelly/LessWrong-43k
LessWrong
A taxonomy of Cruxes [Crossposted to Musings and Rough Drafts] This is a quick theoretical post. I have little idea if this is interesting to others. Plus this is super finicky to read, but I don't know what to do about that. In this post, I want to outline a few distinctions between different kinds of cruxes. Sometimes folks will find what seems to be a crux, but they feel some confusion, because it seems like it doesn’t fit the pattern that they’re familiar with, or it seems off somehow. Often this is because they’re familiar with one half of a dichotomy, but not the other. Conjunctive, unitary, and disjunctive cruxes As the Double Crux method is typically presented, double cruxes are described as single propositions, about which, if you changed your mind, you would change your mind about another belief. But as people often ask, > What if there are two propositions, B and C, and I wouldn’t change my mind about A, if I just changed my mind about B or if I just changed my mind about C? I would my mind about A, only if I shift on both B and C. This is totally fine. In this situation would would just say that your crux for A is a conjunctive crux of B and C. In fact, this is pretty common, because people often have more than one concern in any given situation. Some examples: * Someone is thinking about quitting their job to start a business, but they will only pull the trigger if a) they thought that their new work would actually be more fulfilling for them, and b) they know that their family won’t suffer financial hardship. * A person is not interested in signing up for cryonics, but offers that they would if a) it was inexpensive (on the order of $50 a month and b) if the people associated with cryonics were the sort of people that he wanted to be identified with. [These are the stated cruxes of a real person that I had this discussion with.] * A person would go vegetarian if, a) they were sure it was healthy for them and b) doing so would actually reduce animal suffering (
cd54f8ee-ed2f-4ac7-a2d2-d22deb61c077
trentmkelly/LessWrong-43k
LessWrong
Weird characters in the Sequences When the sequences were copied from Overcoming Bias to Less Wrong, it looks like something went very wrong with the character encoding.  I found the following sequences of HTML entities in words in the sequences:   ’ê d?tre Å« M?lamadhyamaka ĂŚ Ph?drus — arbitrator?i window?and ĂŞ b?te m?me … over?and รก H?jek ĂƒÂź G?nther ĂŠ fianc?e proteg?s d?formation d?colletage am?ricaine d?sir ĂƒÂŻ na?ve na?vely ō sh?nen ö Schr?dinger L?b ยง ?ion ĂƒÂś Schr?dinger H?lldobler Ăź D?sseldorf G?nther – ? Church? miracles?in Church?Turing ’ doesn?t he?s what?s let?s twin?s aren?t I?ll they?d ?s you?ve else?s EY?s Whate?er punish?d There?s Caledonian?s isn?t harm?s attack?d I?m that?s Google?s arguer?s Pascal?s don?t shouldn?t can?t form?d controll?d Schiller?s object?s They?re whatever?s everybody?s That?s Tetlock?s S?il it?s one?s didn?t Don?t Aslan?s we?ve We?ve Superman?s clamour?d America?s Everybody?s people?s you?d It?s state?s Harvey?s Let?s there?s Einstein?s won?t ĂĄ Alm?si Zolt?n ĂŤ pre?mpting re?valuate ≠ ? è l?se m?ne accurs?d รฐ Ver?andi → high?low low?high ’ doesn?t ā k?rik Siddh?rtha รถ Sj?berg G?delian L?b Schr?dinger G?gel G?del co?rdinate W?hler K?nigsberg P?lzl ĂŻ na?vet   I?understood ? I?was Ăś Schr?dinger ĂŽ pla?t úñ N?ez Ĺ‚ Ceg?owski — PEOPLE?and smarter?supporting to?at problem?and probability?then valid?to opportunity?of time?in true?I view?wishing Kyi?and ones?such crudely?model stupid?which that?l
c4d791cb-3187-4695-acf4-b648f5ecc1c8
trentmkelly/LessWrong-43k
LessWrong
GPT-3 Catching Fish in Morse Code Mostly non-serious and slightly silly, with some potentially interesting bits for people who are into language models.  TLDR: The current version of GPT-3 has a strong tendency to encode mangled versions of a specific phrase when asked to write morse code in zero-shot situations. This is possibly the result of a previous version of the model using essentially a single phrase for all morse code writing, which the newer version then learnt to modify.  All completions done with text-davinci-002 (~GPT-Instruct-175B) at zero temperature and with no examples unless stated otherwise. All models used are GPT-Instruct series. The Basics GPT-3 'knows' morse code in a rudimentary sense. It can accurately regurgitate both the encodings of the entire alphabet and of individual letters, but it's not so great at translating words: Morse code is a letter-by-letter encoding, and since GPT sees tokens, it's not all that surprising that the jump from single letters to words might be bigger for GPT than for humans. Tokenizer Token IDs What is surprising is that GPT morse is often much longer than the original word, and quite specific.   Fiddling with Tokens Let's see what happens if we try and make the tokenisation a bit nicer for GPT.  Adding a space doesn't seem to help much. ("n" is tokenised differently to " n" so not too surprising). We also get a similarly weird output with this too.  Target PhraseGPT TranslatedGPT MorseCorrect Morse"i n"I CAUGHT THE.. / -.-. .- ..- --. .... - / - .... ... / -.   Separating the tokens out with a hyphen doesn't help much either, though we do get an N we didn't before. Target PhraseGPT TranslatedGPT MorseCorrect Morse"i-n"I NUGHT THE.. / -. ..- --. .... - / - .... ... -....- -.   It does do better on a string of alphabet letters that are tokenised separately. Target PhraseGPT TranslatedGPT MorseCorrect Morse"qzj"QUQ--.- ..- --.---.- --.. .---   Still, even in this case, GPT's zero-shot morse writing ability leaves quite a b
ac05a824-2c68-4b0a-be48-3adcdbd9fe96
trentmkelly/LessWrong-43k
LessWrong
Notes on Judgment and Righteous Anger This post examines the virtues of judgment and righteous anger. It is meant mostly as an exploration of what other people have learned about these virtues, rather than as me expressing my own opinions about them, though I’ve been selective about what I found interesting or credible, according to my own inclinations. I wrote this not as an expert on the topic, but as someone who wants to learn more about it. I hope it will be helpful to people who want to know more about these virtues and how to nurture them. What are these virtues? Judgment, or censure, is (at least) the ability and willingness to identify and call out something that is unjust or someone who is acting unjustly. Sometimes it goes beyond questions of justice: you can also judge someone unwise, unkind, unreasonable, and so forth. Righteous anger is a motivating fury, usually provoked by injustice. If you are righteously angry you have noticed something (or suffered something) that violates your sense of justice in a way that provokes an emotional response that encourages you to do something about it (and often, to signal your displeasure to those around you). We contain multitudes I have already written about the virtues of good temper, patience & forbearance, and forgiveness & clemency and you might wonder if I’ve lost the plot by pivoting to something like “righteous anger” that seems at first to contradict those. In my defense: 1. These might not be as contradictory as they appear. You may have the virtues of forbearance, forgiveness, good temper, and the like, but still have a limit beyond which you get angry in a virtuous way. It may be that there is a time for forbearance and a time for judgment, a time for good temper and a time to be furious, and that you need skill in both modes in order to best flourish as a human being. Even forgiveness guru Jesus thought that judgment came first, forgiveness after: “If a brother or sister sins, you must rebuke the offender, and if there is repentance
db8a1327-d2d5-472a-bd33-6d2d92de9ec2
trentmkelly/LessWrong-43k
LessWrong
Has anyone explored deliberate infection with milder coronavirus strains for cross-immunity? None
d292b0e2-eab5-4755-89b7-ac0a94b69aee
trentmkelly/LessWrong-43k
LessWrong
Meetup : LessWrong Hamburg - 4 Discussion article for the meetup : LessWrong Hamburg - 4 WHEN: 31 March 2014 07:00:00PM (+0100) WHERE: Michaelisstr.18, Hamburg We will be at yet another location chosen by Sam: Ristorante Buon Giorno, Michaelisstraße 18, 20459 Hamburg next to station Stadthausbrücke quickly reachable from central station. As we don't (yet?) manage to follow an agenda I just give a list of possible topics: Open beginning until the expected persons have arrived A short presentation about theme-centered interaction to improve our communication culture (S.Z.) Books and games. Time for discussing LW topics Planning of next meetup This Meetup will likely end at 10 PM due to the restaurant closing. The mailing list: https://groups.google.com/forum/#!forum/lesswrong-hamburg Discussion article for the meetup : LessWrong Hamburg - 4
853a9019-0f4a-4579-a430-ef431f5ba66f
trentmkelly/LessWrong-43k
LessWrong
Knowledge Base 1: Could it increase intelligence and make it safer? This series of posts presents the idea of building a knowledge database that allows for the collection and exchange of reliable information between people, computers, and between people and computers. It also discusses how this database could increase the intelligence of both people and computers, and increase the safety of AGI. Usefulness of credible information for people and computers Correct information often lets us make better decisions. AI will also perform better if it has access to a larger amount of credible information, both during learning and when answering questions. Therefore, this database could increase the collective intelligence of people and the intelligence of computers. Independent functioning of AI in the information space The database would be a vast and credible model of a significant part of the physical world we live in, understandable to people, computer programs, and AI. The fact that the model is understandable means that both people and computers can interact with it intelligently. A child learns about the world through interactions resulting from being "immersed" in the physical environment. One of the primary goals of a living organism is survival, and the absence of pain is an important indicator of whether it is achieving this goal well. How could AI independently understand the world and function within it? Instead of immersing AI-controlled robots in the physical environment of people, I propose to immerse intelligent agents in an information space that reflects our world. Interactions with this space would involve exchanging information (i.e., reading from and writing into this space) and evaluating its correctness. One of the primary goals of an intelligent agent would be to answer questions correctly, and its reputation would be an indicator of achieving this goal well. With this defined goal, AI could autonomously pursue objectives important for people. AI safety through human-readability Because the information space
8c2d4a7c-0412-4dbb-9db4-25a45beabdd1
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Google AI integrates PaLM with robotics: SayCan update [Linkpost] Back in April, Google AI announced [SayCan](https://say-can.github.io/), a project which integrated a language model (FLAN) with robotics in order to produce a robot which could follow instructions. For example, cleaning up a mess in a kitchen. ([LessWrong post](https://www.lesswrong.com/posts/wrkEnGrTTrM2mnmGa/retracted-it-s-time-for-ea-leadership-to-pull-the-short) from April which links to a tweet about SayCan.) **This week Google AI has released some new updates, dubbed PaLM-SayCan.** This involved upgrading the integrated language model to Google AI's top performing large language model (LLM) of 540-billion parameters, PaLM. [Website for PaLM-SayCan](https://sites.research.google/palm-saycan) [Blog post announcing PaLM-SayCan](https://ai.googleblog.com/2022/08/towards-helpful-robots-grounding.html) The following updates from this week are quoted from the original SayCan project website (linked in first sentence of this post, bold mine): > * [8/16/2022] We integrated SayCan with [Pathways Language Model (PaLM)](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), and updated the results. We also added [new capabilities](https://say-can.github.io/#new-capability) including drawer manipulation, chain of thought prompting and multilingual instructions. You can see all the new results in the updated [paper](https://say-can.github.io/assets/palm_saycan.pdf). > * [8/16/2022] Our updated results show that SayCan combined with the improved language model (PaLM), which we refer to as **PaLM-SayCan, improves the robotics performance of the entire system** compared to a previous LLM (FLAN). PaLM-SayCan chooses the correct sequence of skills 84% of the time and executes them successfully 74% of the time, **reducing errors by a half compared to FLAN**. This is particularly exciting because it represents the first time we can see how an improvement in language models translates to a similar improvement in robotics. > * [8/16/2022] We [open-sourced](https://say-can.github.io/#open-source) a version of SayCan on a simulated tabletop environment. > *Thanks to Jon Menaster for making me aware of this update from Google AI.*
a30bec1d-6dea-4ef3-bbe8-2fb278586e82
trentmkelly/LessWrong-43k
LessWrong
Maximal Ventilation At this point, one of the things I'm most concerned about with covid-19 is breathing air that has been recently breathed by other people. Six foot distancing would be pretty good if it quickly settled out of the air, but it's looking more like it can stay suspended in the air for extended periods. The ideal is outdoors. Even on a relatively calm day, the amount of air flow outdoors is enormous compared to what you might get inside. A gentle 1mph breeze clears an 90-ft diameter circle in a minute. An aggressive level of ventilation indoors is about a quarter of that, and most places get far less. So the more we can move outside, the better: I think it's foolish that we have restaurants open for indoor dining when outdoor dining gets you most of the same benefit with much less shared air. Especially when considering that eating means people will have their masks off while talking. Similarly, while people definitely need to be able to buy food, moving grocery stores to a model of offering only outdoor pickup would be safer for both the customers and the employees then letting the public in. The previous two summers my kids did an outdoor summer program in a park, where the only indoor component was bathrooms. It didn't run on rainy days, which would make it hard to plan around, but it's much more socialization for the level of risk than something indoors. Prompted by tuberculosis risk, 100 years ago, occasionally students would have school outdoors, even in the winter: When things can't be moved outdoors, it's still possible to dramatically increase the level of ventilation indoors. Dance organizers and people who cool their houses using fans are familiar with this: put a lot of fans in windows, use some fans inside to stir up the air, and think through the path the air will take. The best case is, one side of a room has fans blowing out each window, with cowls, and the other side of the room has a set fans blowing in. For example, if you have a 900 sqft classroom
f40180a1-8016-4c8c-ad44-92634253c91e
trentmkelly/LessWrong-43k
LessWrong
Intelligent behaviour across systems, scales and substrates Tl;dr In an earlier post, I introduced a metaphor for thinking about the epistemic landscape of AI alignment, and then described three epistemic strategies for making progress on the alignment problem.  In this post, I will double-click on the third strategy: learning from intelligence-in-the-wild. In particular, I will explore in more detail a core assumption this epistemic bet is based on—namely, that intelligent behaviour, as exhibited by different types of systems, both natural and artificial, share underlying principles which we can study and exploit.  An epistemic bet: Learning from Intelligence-in-the-Wild Earlier, I defined the epistemic strategy of learning from intelligence-in-the-wild as follows:  > Finally, the third approach attempts to chart out the possibility space of intelligent behaviour by looking at how intelligent behaviour manifests in existing natural systems ([3] in Fig. 1).  Instead of calling it a strategy, we could also call it an epistemic bet. This framing is helpful in emphasising the fact that in research, we are in the business of making bets. We cannot be entirely confident a given approach will bear fruit before we try it. But we can be more or less confident it will, and thus make informed bets. Furthermore, the term "bet" makes appeals to epistemic pluralism more intuitive (by reference to the reasons for why diversifying, say, one’s investment portfolio is a good idea).  In the case of this specific bet, the hope is that by studying intelligent behaviour as it manifests in existing biological or social systems, and by recovering principles that govern complex systems across various scales and modalities of implementation, we can gain substantial insights into how to design intelligent behaviour (with certain desired properties) in artificial systems. Premise: Intelligent behaviour across systems, scales and substrates What reasons do we have to expect this strategy will be fruitful? The promise of this epistemic bet is,
a4ab807d-aed2-4462-ba13-786ddc708835
StampyAI/alignment-research-dataset/arxiv
Arxiv
Liability regimes in the age of AI: a use-case driven analysis of the burden of proof 1 Introduction --------------- AI-based systems have the potential to bring some benefits and opportunities to our societies, generating and transforming new products and services in multiple application domains. In recent years, there has been a real technological revolution in the advancement of AI, and more specifically, in Machine Learning (ML) and Deep Learning (DL) methods. Unlike conventional computer algorithms, in which the programmer explicitly implements the instructions needed to solve a particular problem, ML/DL approaches 111From now on, the term AI will be used generically to refer to both ML and DL. are based on the idea that the solution can be obtained by accessing data sufficiently representative of the problem and using a training procedure that allows fitting a mathematical model to such data. This approach has been greatly benefited by the availability of massive data and improved computing power, which has enabled the use of increasingly complex models capable of solving increasingly sophisticated problems. In addition, this approach allows for continuous improvement of the system throughout its life cycle as more data and experience of use and interaction becomes available, making it a very powerful methodology. However, as this technologies advances, there is growing concern about the risks to safety and fundamental rights that its adoption may entail. These risks arise mainly from certain intrinsic characteristics of certain AI approaches, such as lack of causality, opacity, unpredictability or the uncertainty derived from self and continuous learning capabilities. And although there are legal mechanisms proposed to minimize these risks, such as ex-ante requirements in the recently proposed European regulatory framework for AI [[21](#bib.bib10 "Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artifical Intelligence (Artificial Intelligence Act) and amending certain union legislative acts")], general safety standards, or ethical and socially responsible AI approaches [[13](#bib.bib1 "Socially Responsible AI Algorithms: Issues, Purposes, and Challenges")], [[28](#bib.bib2 "Taking Principles Seriously: A Hybrid Approach to Value Alignment in Artificial Intelligence")], they do not exclude the possibility of some end user or bystander being harmed. In such cases, victims should be able to seek compensation, and the most typical way to do so is on the basis of liability regimes, in particular tort law [[29](#bib.bib56 "Basic questions of tort law from a comparative perspective")]. In addition, recent advances in AI allow the degree of human intervention and supervision to become less and less, which also brings with it the need to clarify the attribution of responsibility. Generally speaking, we can identify three legal frameworks by which victims can obtain compensation for product-induced damages [[17](#bib.bib52 "Expert group on liability and new technologies - new technologies formation, liability for artificial intelligence and other emerging digital technologies")]: * Fault-based liability: where the injured parties or claimants have to prove, in principle, that the defendant (or its employees in the case of vicarious liability) caused the damage intentionally or negligently (i.e., wrongdoer). This involves identifying the applicable *standard of care* the defendant should have fulfilled, and proving that it was not fulfilled. In the language of negligence the issue would be whether the product or some of its key components were negligently designed, manufactured, maintained, marketed, operated or used. * Strict-based liability (a.k.a. risk-based liability): based on the understanding that someone is allowed to use a dangerous object or perform a risky activity for her own purposes, so this person also bears the loss if such risk materialises [[27](#bib.bib57 "Comparative law study on civil liability for artificial intelligence")]. Therefore the victim does not need to prove the misconduct on the part of any wrongdoer. Instead, the injured parties only have to prove that the risk arising from the sphere of the liable party actually materialised (i.e., the risk subject to strict liability materialises). * Product-based liability222We focus primarily on product liability at the EU level. Note that although in most US states product liability is linked to a defect, this is not applicable in all cases.: where victims can claim against the producer (manufacturer, importer, supplier or seller) for a defect present at the time the product was placed on the market. Injured parties or claimants have to prove that the product was defective (irrespective of fault) and the causal link between the defect and the damage. A defectiveness product is a product with an unintended flaw, i.e., the finished product does not conform to the producer’s own specifications or requirements or to general safety requirements. In this sense, the product fails to provide the safety that the public at large is entitled to expect (i.e., *standard of safety*), and this lack of safety causes the damage. In the language of defectiveness (similar to fault) the product or some of its key components may have design, manufacturing, maintenance or marketing defects [[46](#bib.bib58 "Machines without principals: liability rules and artificial intelligence")]. The impact and challenges that AI systems have on liability rules have been extensively studied over the past few years. One of the first points of discussion focuses on the legal personality that could be attributed to an autonomous AI-based system that makes decisions without human intervention or supervision. As highlighted in [[11](#bib.bib55 "Liability for damages caused by artificial intelligence")], neither national nor international law recognizes AI as a subject of law, which means that an AI system cannot be held personally liable for the damage it causes. The question to be asked would be who is liable for damages caused by the actions of AI. On the one hand, when the AI system simply acts as a tool to provide humans with additional knowledge, recommendations, etc., the person operating that tool would then be ultimately responsible for any decisions (e.g., the learned intermediary doctrine [[41](#bib.bib54 "Are current tort liability doctrines adequate for addressing injury caused by ai?")]). On the other hand, as AI becomes more autonomous, it will be more difficult to determine who or what is making decisions and taking actions [[40](#bib.bib53 "Transparency and fairness in machine learning applications")]. Attributing legal personality to AI systems may be a possibility in some cases, but as concluded by the Expert Group on Liability and New Technologies set up by the European Commission for the purposes of liability [[17](#bib.bib52 "Expert group on liability and new technologies - new technologies formation, liability for artificial intelligence and other emerging digital technologies")], it is not necessary to give autonomous systems a legal personality, as the harm AI systems may cause can and should be attributable to existing persons or organizations. Harm caused by AI systems, even fully autonomous ones, is generally reducible to risks attributable to natural persons or existing categories of legal persons, and where this is not the case, new laws directed at individuals are a better response than the creation of a new category of legal person [[1](#bib.bib51 "Punishing artificial intelligence: legal fiction or science fiction")]. Another point of debate has been to establish which *liability framework* is the most appropriate for AI systems. The main issue is to decide whether to apply a standard of care (negligence-based) or a standard of safety (defect-based). As discussed in [[46](#bib.bib58 "Machines without principals: liability rules and artificial intelligence")], and unlikely as it may seem, this issue was addressed by a court over six decades ago (in 1957) concerning a car accident due to one pedestrian (the claimant) crossing inappropriately. In its analysis, the court argued that the driver, however efficient, is not a mechanical robot capable of avoiding an accident in such circumstances. Implicitly, the court argued that an autonomous vehicle must operate according to a safety standard. As explained further in [[46](#bib.bib58 "Machines without principals: liability rules and artificial intelligence")], in modern product liability law, such a standard would likely be set on a risk-utility basis (strict liability). In the past, claims related to product failures were initially handled under negligence theories, largely because the focus was on the conduct of humans, not the performance of machines. However, as negligence claims related to product failures became more difficult to prove, strict liability principles took root to govern product liability cases, especially in the case of complex and highly autonomous products such as modern AI systems. As suggested by [[25](#bib.bib50 "Liability for ai decision-making: some legal and ethical considerations")], liability for a defective product applies when, among other possibilities, a reasonable alternative design (or manufacturing, maintenance or marketing processes) could have avoided or limited foreseeable risks of harm. For complex AI systems, deciding who is responsible and for what when someone has been injured can be extremely complicated as many parties come into play in the complex supply chain. For example, there are, among others: AI developers, developers of training frameworks, data collectors, annotators, controllers and processors, providers of AI systems integrated in other AI systems, manufacturers of the products incorporating the AI systems, users of these products, etc. On top of that, we find the aforementioned specific characteristics of AI systems (i.e., lack of causality, opacity, unpredictability and self and continuous learning) which makes it substantially more difficult to demonstrate causal relationships [[4](#bib.bib49 "The artificial intelligence black box and the failure of intent and causation")]. Therefore, the burden of proving a design alternative that might have avoided harms can be huge and, in some cases, unfeasible for the victim to address. As explained in [[8](#bib.bib48 "Towards intelligent regulation of artificial intelligence")] with respect to arguing and evaluating how complex algorithms (e.g., AI systems) may have caused harm, it is obvious that courts and injured parties remain at a disadvantage compared to the expert developers producing the systems. The problem of the difficulty in proving causation for AI systems has been clearly identified in the literature, whether for fault-based or defect-based liability regimes. As described by the Expert Group on Liability and New Technologies, regarding wrongfulness and fault: “*In the case of AI, examining the process leading to a specific result (how the input data led to the output data) may be difficult, very time-consuming and expensive*”. And with respect to product liability: ”*the complexity and the opacity of emerging digital technologies complicate chances for the victim to discover and prove the defect and prove causation*” [[17](#bib.bib52 "Expert group on liability and new technologies - new technologies formation, liability for artificial intelligence and other emerging digital technologies")]. In view of these difficulties, experts and academics propose various alternatives to alleviate the burden of proof on victims, including the reversal of the burden of proof, rebuttable presumptions, or the application of strict liability regimes, among others [[27](#bib.bib57 "Comparative law study on civil liability for artificial intelligence")]. From a technical perspective, the need of explainable AI has been also proposed as a mean to overcome the AI opacity issue [[33](#bib.bib4 "Black is the new orange: how to determine AI liability")], [[23](#bib.bib5 "AI Opacity and Explainability in Tort Litigation")]. In this paper, we focus on causation in the context of AI to present a set of use cases, including legally relevant technical details, which illustrate the specific difficulties involved in AI systems when it comes to proving causation, either from a standard of care or from a standard of safety point of view. Contrary to previous works [[17](#bib.bib52 "Expert group on liability and new technologies - new technologies formation, liability for artificial intelligence and other emerging digital technologies")], [[27](#bib.bib57 "Comparative law study on civil liability for artificial intelligence")], [[19](#bib.bib3 "The AI Liability Puzzle and A Fund-Based Work-Around")] the objective of our use case analysis is not only to test the legal or insurance issues, but to address the technical difficulties that even in the best case scenario (i. e., with access to the documentation and to the logs) an expert would have to face in order to prove fault or defect. We focus on systems that can produce physical damage, so the link between defect or fault and the harm should be also established. We therefore omit pure software based cases such as the recent Australian case of ACCC v Trivago [[23](#bib.bib5 "AI Opacity and Explainability in Tort Litigation")]. The selected examples represent recent technological developments, potentially available within a relatively short time horizon, which may pose risks to third parties. In this analysis, we link the difficulties in addressing the burden of proof to the characteristics of certain AI systems such as lack of causality, opacity and unpredictability. The structure of the rest of the paper is as follows. First, we describe those specific features of AI that we consider pose a problem in demonstrating causality when attributing responsibility in liability regimes. Second, we present the methodology for generating the use cases. Third, we describe the use cases and, finally, provide a general discussion, conclusions and future work. 2 Specific features of certain AI systems ------------------------------------------ Unfortunately, there is no commonly accepted technical definition of AI that is valid for multiple contexts [[38](#bib.bib47 "AI watch. defining artificial intelligence 2.0. towards an operational definition and taxonomy for the ai landscape")]. Generally speaking, we can say that AI is a broad concept related to machines capable of making decisions and performing tasks in a way that we would consider intelligent or human-like. However, although AI is often discussed in general terms, most of the features linked to safety and human rights risks are mainly present in recent ML/DL approaches. Indeed, in these cases the methodology changes substantially from conventional computational systems. Instead of explicitly implementing the instructions needed to address a particular problem, data-driven AI techniques capture data sufficiently representative of the problem to be solved, and fit a complex mathematical model with many parameters that are determined during a training process. This approach has been conveniently reinforced by the growing availability of massive datasets, as well as increasingly powerful computational systems. They have allowed the use of more and more sophisticated mathematical models with a greater number of parameters capable of dealing with increasingly complex problems. Moreover, this approach allows for continuous training of the system throughout its life cycle as more data and experience of use and interaction becomes available. This enables continuous improvement without a substantial change in methodology. On the one hand, the complexity and power of AI systems allows to implement solutions that perform tasks and make decisions with increasing autonomy333Note that some research communities prefer to use ”automation” rather than ”autonomy”. The trend we observe is that both concepts can be used interchangeably, although autonomy can also contemplate actions beyond the intended domain., understood as *the capability to perform with an absent or low degree of external influence or human involvement* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]. However, it is important to note that complexity and autonomy are not exclusive characteristics of AI systems. Complexity is a broader concept related to the whole system (number of interconnected elements, complexity of the supply chain, etc.) and to the operating environment (number and type of agents, interactions, structure of scenarios, etc.). The level of autonomy, although dependent on the capacity of the system, is still a design variable. On the other hand, these approaches may suffer from multiple limitations that need to be properly addressed. These include potential bias in the data that is indirectly incorporated into the trained model [[42](#bib.bib46 "Fair and unbiased algorithmic decision making: current state and future challenges")] 444Bias in AI can be more broadly defined as *an anomaly in the output of AI systems, due to the prejudices and/or erroneous assumptions made during the system development process or prejudices in the training data, so the results from the AI system cannot be generalised widely* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]., possible problems of overfitting [[48](#bib.bib45 "An overview of overfitting and its solutions")], i.e., an excessive fit to the data that leads to a lack of generalization555*Generalization* refers to the ability of the model to adapt adequately to new, previously unseen data, drawn from the same distribution as that used to create the model., or the problem known as the curse of dimensionality666The amount of data needed to represent a problem grows exponentially with the dimension of the input space (i.e. with the number of variables or features). [[45](#bib.bib44 "The curse of dimensionality in data mining and time series prediction")], especially relevant in problems with multiple inputs. But again, it is important to note that bias issues, or the ability to find a complete or sufficiently general solution to a problem, or the difficulties in finding solutions to problems in high dimensional spaces, are not unique to AI systems. Bias is introduced in both algorithm design and data, but it is a feature of virtually all designs carried out by humans and, of course, is an intrinsic feature of human decision-making. The dimensionality problem is an extrinsic feature that refers to the complexity of the operating scenario and the number of variables involved in the system output. This is also related to the need for data, as the number of data samples required for an acceptable representation of the input space increases exponentially with dimensionality. Finally, overfitting or lack of generalisation, while tending to be more prominent when systems are more complex and flexible (e.g. ML/DL), other more conventional computational systems (e.g. expert systems or a set of fixed rules) can suffer from the same problem, for example, due to bad specification or poor design. In our view there are four specific characteristics of certain AI/ML systems that could pose significant challenges to proving causality in liability regimes, namely, lack of causality, opacity, unpredictability and self and continuous learning capability. These features are described in detail below. In Fig. [1](#S2.F1 "Figure 1 ‣ 2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof") we illustrate all the aforementioned characteristics of AI/ML systems and their impact on the difficulty when proving causation. As can be seen, their impact and degree of exclusivity in relation to AI systems is conceived more as a continuum than as a discrete categorisation. ![Notional representation of the intrinsic and extrinsic features of AI systems and their impact on proving causation.](https://media.arxiv-vanity.com/render-output/7078816/x1.png) Figure 1: Notional representation of the intrinsic and extrinsic features of AI systems and their impact on proving causation. ### 2.1 Lack of causality Causality can be defined as a generic relationship between two variables: the effect and the cause that gives rise to it. As described in [[26](#bib.bib37 "A survey of learning causality with data: problems and methods")], when learning causality with data, we need to be aware of the differences between statistical associations and causation. The current success of data-driven AI is mainly based on its ability to find correlations and statistical patterns in large-scale, high-dimensional data (i.e., statistical modelling). And not just any type of data, but usually *independent and identically distributed (i.i.d)* data. That is, random observations that are not dependent on each other and have the same probability of occurring (e.g, rolling a die several times). Problems can be intrinsically i.i.d, or they can be made approximately i.i.d. and, in such cases, convergence of a learning algorithm can be theoretically guaranteed at the lowest achievable risk [[39](#bib.bib36 "Toward causal representation learning")]. Therefore, it is not surprising that, with sufficient data, data-driven AI can surpass human performance. The i.i.d. assumption can be violated in the “independent” part and/or in the “identically distributed” part. On the one hand, if there are statistical dependencies between variables, this assumption does not allow explicit modelling and learning of the causal relationships between them. On the other hand, if the distribution from which the data is derived differs slightly between the training phase and real-world operation, the data-driven AI system often performs poorly. For example, computer vision systems can fail recognizing objects when they appear under new lighting conditions, different viewpoints, or against new backgrounds [[3](#bib.bib34 "ObjectNet: a large-scale bias-controlled dataset for pushing the limits of object recognition models")]. In most cases, real-world data do not fully satisfy the i.i.d. hypothesis. As described in [[35](#bib.bib35 "Elements of causal inference. foundations and learning algorithms")] and [[39](#bib.bib36 "Toward causal representation learning")], *generalizing well outside the i.i.d. setting requires learning not mere statistical associations between variables, but an underlying causal model*. Learning causal relations is highly complex, and it requires collecting data from multiple domains (i.e., multiple distributions) as well as the ability to perform interventions (i.e., interventional data) that trigger a change in the data distribution [[39](#bib.bib36 "Toward causal representation learning")]. Causality (or lack thereof) is highly correlated with the ability of data-driven AI systems to respond to unseen situations (linked to unpredictability and generalization capabilities) and to remain robust when some interventions change the statistical distribution of the target task. This includes adversarial attacks, where carefully selected perturbations of the inputs, typically imperceptible or inconspicuous to humans (e.g., adding invisible noise to images, or inverting letters in a text) can induce large variations in the system’s outputs [[34](#bib.bib38 "The limitations of deep learning in adversarial settings")] (constituting violations of the i.i.d. assumption). In addition, learning causality can help ML systems to better adapt to other domains (e.g., multi-task and continuous learning), and improve interpretability and explainability of ML systems [[14](#bib.bib33 "Counterfactuals and causability in explainable artificial intelligence: theory, algorithms, and applications")] (linked to opacity as presented in Section [2.2](#S2.SS2 "2.2 Opacity ‣ 2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof")). However, despite numerous advances in this field [[35](#bib.bib35 "Elements of causal inference. foundations and learning algorithms")], learning causal relationships still poses numerous challenges and so far, according to [[39](#bib.bib36 "Toward causal representation learning")], data-driven AI *has neglected a full integration of causality*. ### 2.2 Opacity The formal definition of opacity refers to obscurity of meaning, resistance to interpretation or difficulty of understanding. In the AI domain it is also known as the black-box effect [[10](#bib.bib43 "Can we open the black box of AI?")] since the decision-making process with ML appears inscrutable from the outside. Even when AI experts, or the creators of the AI system themselves, examine the system (source code of the model and the training process, the model architecture, the trained parameters, the training, validation and test datasets, etc.), it is difficult or impossible to understand how they combine to form a decision. Opacity arises from the inability to provide human-scale reasoning from complex AI models [[9](#bib.bib42 "How the machine ‘thinks’: understanding opacity in machine learning algorithms")]. This intrinsic feature of certain AI systems has prompted the development of transparency requirements, whether horizontal (e.g., the AI Act [[21](#bib.bib10 "Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artifical Intelligence (Artificial Intelligence Act) and amending certain union legislative acts")]) or sector-specific (e.g., possible requirements in the field of autonomous vehicles as described in [[22](#bib.bib41 "Trustworthy autonomous vehicles")]). The compliance with transparency requirements (which include measures to address traceability777Traceability of an AI system refers to the *capability to keep track of the processes, typically by means of documented recorded identification*, or to the *ability to track the journey of a data input through all stages* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]., interpretability 888AI models are interpretable *when humans can readily understand the reasoning behind predictions and decisions made by the model* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]., or explainability999Explainability in AI can be understood as a *feature of an AI system that is intelligible to non-experts*, or as *methods and techniques in AI such that the results of the solution can be understood by humans*[[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]) will alleviate the burden of proving causality. However, the well-known trade-off between accuracy and interpretability in AI systems remains as an obstacle, i.e., more accurate models are less interpretable and vice versa. Furthermore, attempts to explain black-box machine learning models may not be sufficient to demonstrate causality [[37](#bib.bib40 "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead")]. Therefore, despite efforts to develop interpretable systems by design or implement post-hoc explanations, the problem of opacity of complex AI systems will remain one of the most critical issues for victims when trying to prove causation in either fault or defect schemes. ### 2.3 Unpredictability Although unpredictability has often been associated with the self and continuous learning capability of AI/ML systems, this characteristic is intrinsic to static or ”frozen” systems, i.e. systems that do not continue to learn and adapt while in operation. Unpredictability in data-driven AI systems is mainly due to two reasons. First, in the case where the dataset is not sufficiently representative of the problem to be addressed by the machine learning model. Regardless of the generalization capability of the model and the training process, the solution found in the poorly represented regions of the input space will generate unpredictable results. We illustrate this effect in Fig. [2](#S2.F2 "Figure 2 ‣ 2.3 Unpredictability ‣ 2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof") (left). As can be seen, if the underlying (unknown) pattern of what is to be learned is very complex and the training, validation and test data are not sufficiently representative, the function learned by the data-driven AI model may not fit the real nature of the problem and generate totally unpredictable values (which may lead to safety issues depending on the scenario). This problem is very significant in cases of high dimensionality and complexity of the input space, the operating environment, the number and type of interactions with agents, etc., where obtaining sufficiently representative data is a major challenge. The second reason refers to the lack of generalization or overfitting of the learned function. Even in the case where the input space is reasonably well represented by the data set (which, after all, is always a limited sample of the entire input space), when the learning process over-fits the data, the outcome of the learned function for samples not available during training, validation and testing can be totally unpredictable. This effect is illustrated in Fig. [2](#S2.F2 "Figure 2 ‣ 2.3 Unpredictability ‣ 2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof") (right). ![Illustration of unpredictability issues due to (left) poor representation of the input data space and (right) overfitting. The underlying function represent the desired, unknown pattern. Two examples of erroneous and unpredictable results (red circles) compared with the expected outcome (gray circles) are depicted on each case. This is a simple scenario with only one input. Normally the dimension of the input data space is much larger. ](https://media.arxiv-vanity.com/render-output/7078816/x2.png) Figure 2: Illustration of unpredictability issues due to (left) poor representation of the input data space and (right) overfitting. The underlying function represent the desired, unknown pattern. Two examples of erroneous and unpredictable results (red circles) compared with the expected outcome (gray circles) are depicted on each case. This is a simple scenario with only one input. Normally the dimension of the input data space is much larger. In both cases, these effects imply that even small, almost imperceptible, linear changes in inputs can produce abrupt and unpredictable changes in outputs. This becomes more pronounced the larger the non-linear relationship between inputs and outputs. Moreover, it is these constraints that allow exploiting the aforementioned adversarial attacks. In addition, it is important to highlight certain types of AI systems, that is, recurrent models [[31](#bib.bib39 "Recurrent neural networks for prediction: learning algorithms, architectures and stability")], in which the output not only depends on the inputs but also on the internal state of the model. This means that the same input at two different time instants (with two different model states) may provide different outputs. This feature allows dealing with dynamic problems where states and inputs prior to the instant of decision are relevant. But this feature also implies that, given the same inputs, small variations in the state of the model can generate different results, which implies a source of unpredictability. Finally, it should be noted that unpredictability can lead to significant problems of repeatability101010*The measurement can be obtained with stated precision by the same team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same location on multiple trials* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]., reproducibility111111*The measurement can be obtained with stated precision by a different team and a different measuring system, in a different location on multiple trials* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]. or replicability121212*The measurement can be obtained with stated precision by a different team using the same measurement procedure and the same measuring system, under the same operating conditions, in the same or a different location on multiple trials* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]. While conventional computational approaches are less likely to provide different results under the same operating conditions, the unpredictability issues described above pose an additional difficulty in ensuring system consistency and robustness. ### 2.4 Self and continuous learning Although the terms self-learning and continuous learning have often been used in a somewhat vernacular way, for example by mixing them, or by referring to only one of them with both meanings, these are two distinct terms with different meanings. On the one hand, self-learning refers to the ability of the AI system to *recognize patterns in the training data in an autonomous way, without the need for supervision* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]. This definition implicitly refers to *unsupervised learning* (i.e., learning that makes use of unlabelled data during training [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]) but explicitly stating that this is done without human supervision (i.e., autonomously). Nevertheless, this term does not specify if the process is performed off-line or while the system is in operation. Another definition refers to *learning from an internal knowledge base, or from new input data, without introduction of explicit external knowledge* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]. This definition implicitly mentions *domain adaptation* and might suggest that the process is autonomous. On the other hand, continuous learning refers to *incremental training of an AI system that takes place on an ongoing basis during the operation phase of the AI system life cycle* [[20](#bib.bib32 "Glossary of human-centric artificial intelligence")]. Therefore, this term explicitly excludes off-line learning as it specifically refers to on-line learning while in operation. Although the level of human supervision is not included in the definition, we can implicitly assume that if the operation of the system is autonomous, continuous learning also takes place autonomously. The aforementioned characteristics of lack of causality and unpredictability make this approach high risk. Additionally, it is worth mentioning an effect known as catastrophic forgetting. That is, under certain conditions, the process of learning a new set of patterns (in this case continuously during operation) suddenly and completely interferes, or even erases, the model’s knowledge of what it has already learned [[24](#bib.bib31 "Catastrophic forgetting in connectionist networks")]. Self and continuous learning are crucial for systems operating in changing environments, as they enable the acquisition, fine-tuning, adaptation and transfer of increasingly complex knowledge representations. For example, they are widely used in contexts where adaptation to the profile of each user is required. They are also commonly used during the development phase of complex AI systems (e.g., reinforcement learning). However, once systems are deployed, and in cases where they may cause damage to persons or property (triggering liability mechanisms), such approaches may involve unacceptable risks. Moreover, systems may also be damaged, with the costs that this entails. From the perspective of fault- and product-based liability regimes, it is clear why self and continuous learning are a major challenge, as they are directly related to the question of foreseeability [[36](#bib.bib30 "Whose robot is it anyway?: liability for artificial-intelligence-based robots")]. It is reasonable to assume that a defendant will only be held liable if it could reasonably foresee and prevent the potential results of an action [[6](#bib.bib29 "Artificial intelligence & damages: assessing liability and calculating the damages")]. If an AI system with self and continuous learning capabilities is placed on the market and causes harm to a person, it would be very difficult for the claimant to prove that the system was negligently designed or defective, and especially that these issues existed when the system left the hands of the providers [[11](#bib.bib55 "Liability for damages caused by artificial intelligence")]. That is one of the reasons for suggesting that the development risk defence, which allows the provider to avoid liability if the state of scientific and technical knowledge at the time when the product was put into circulation was not such as to enable the existence of the defect to be discovered, should not apply in these cases [[17](#bib.bib52 "Expert group on liability and new technologies - new technologies formation, liability for artificial intelligence and other emerging digital technologies")]. Safety regulations, or specific AI regulations (e.g., the AI Act [[21](#bib.bib10 "Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artifical Intelligence (Artificial Intelligence Act) and amending certain union legislative acts")]), are conceived to certify that a specific “version” of a system complies with established requirements. Enabling self and continuous learning is highly unlikely, as these features may substantially modify the behaviour of the system after certification. This could lead to cases where two AI systems of the same type, exposed to different environments, would differ in their behaviour over time. For example, in the context of vehicle safety regulations, there is preliminary consensus that self and continuous learning should be not be allowed, as they are incompatible with existing regulatory regimes and safety expectations [[44](#bib.bib28 "Consolidation of the comments on grva-11-03 (artificial intelligence and vehicle regulations)")]. Another example can be found in the framework of the AI Act. High-risk AI systems that “continue to learn” after being placed on the market or put into service shall be developed with appropriate mitigation measures. Whenever there is a change in the behaviour that may affect compliance with the regulation (i.e., a substantial modification) the AI system shall be subject to a new conformity assessment [[21](#bib.bib10 "Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artifical Intelligence (Artificial Intelligence Act) and amending certain union legislative acts")]. In other words, self and continuous learning are only allowed if the provider can predetermine and foresee the changes in the performance. Depending on the context, this may be virtually impossible. For example, we can consider cases where behavioural adaptation through self and continuous learning is necessary to improve the interaction with end-users, but only if the change in the AI system does not affect their safety. Accordingly, when we refer to the issue of lack of foreseeability or predictability of AI systems, and how it challenges different liability regimes, it is more realistic to mainly focus on the intrinsic features of lack of causality and unpredictability rather than on self and continuous learning. It is reasonable to assume that safety or AI specific regulations will not allow for the integration of self and continuous learning if they can affect the behaviour of the system after certification. That is the main reason why these characteristics appear with less impact in Fig [1](#S2.F1 "Figure 1 ‣ 2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"). 3 Methodology -------------- The main goal of this work is to develop a number of use cases to illustrate the specific difficulties of AI systems in proving causation, and to address the technical challenges that an expert would face in proving fault or defect, including legally relevant technical details. In this section we describe the inclusion criteria and the proposed structure for describing the use cases. ### 3.1 Inclusion criteria For the identification and elaboration of the use cases we have developed a three-pillar inclusion criterion that comprises multiple elements related to the technology, the liability framework and the risk profile. An overview of the proposed methodology is shown in Fig. [3](#S3.F3 "Figure 3 ‣ 3.1 Inclusion criteria ‣ 3 Methodology ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"). The proposed sets of criteria would need to be fulfilled cumulatively. In the following, the different criteria used for the development of the use cases are described. ![A schematic view of the proposed methodology to develop the use cases.](https://media.arxiv-vanity.com/render-output/7078816/x3.png) Figure 3: A schematic view of the proposed methodology to develop the use cases. #### 3.1.1 Technology aspects From a technology point of view, the proposed approach takes into account two main aspects. * Technology readiness: the use cases should include AI systems that are sufficiently realistic to be described with a sufficient degree of granularity and confidence. This could refer to experimental platforms that could have been used in relevant but restricted environments or piloted under certain conditions in operational environments. Therefore, they do not necessarily have to be products currently on the market, but they should have a relatively high level of technological maturity and availability. For example, if we use the Technology Readiness Levels (TRLs) [[32](#bib.bib6 "AI Watch: Revisiting Technology Readiness Levels for relevant Artificial Intelligence technologies")], we focus on systems between TRL 5 and 7. * Specific AI features: we focus on the specific features of certain types of AI described in Section [2](#S2 "2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"), that is, lack of causality, unpredictability, opacity and, in some cases, self and continuous learning. As described above, we can assume that safety or AI specific regulations will not allow self and continuous learning if they can affect the behaviour of the AI system in a potentially dangerous way. #### 3.1.2 Features of the liability framework As far as the liability framework is concerned, we consider three main aspects: the liability regime, the lack of insurance and the type of victim-AI provider/user relationship. An additional criterion (the type of damage) can also be linked to this section, but is mainly associated with the general context. * Liability regimes: the main frameworks considered are the product liability and fault-based liability regimes, based on proof of defect and fault, respectively. The idea, therefore, is to avoid scenarios where strict liability is directly applicable, or where it is at least compatible with product- and fault-based approaches. * Lack of insurance: since the purpose is to identify situations where the victim must bear the burden of proof, it is important to avoid approaches where the damage is covered by insurance [[19](#bib.bib3 "The AI Liability Puzzle and A Fund-Based Work-Around")]. This is the case, for example, for autonomous vehicles, which will be covered by motor insurance schemes. This criterion is strongly related to the avoidance of strict liability, which is usually supported by insurance. * Victim-AI provider/user relationship: To trigger both contractual and non-contractual civil liability mechanisms, the victim and the AI system provider or user should not have any contractual relationship. That is, scenarios should be based on the damage caused to third parties (e.g. bystanders). Therefore, non-private space and applications would be considered suitable for the use cases included. #### 3.1.3 Risk profile Once the type of relationship between the victim and the provider or user of the AI system has been established, the next step is to define the risk profile, depending on the type of harm and its significance. * Type of harm: we are interested in AI systems that can cause some kind of compensable harm within the different liability regimes, i.e., personal injury, including medically recognised psychological harm, and damage to property. * Significance of harm: although the use cases may not necessarily put the ”general public” at risk, the possibility of harm to third parties in the envisaged scenarios should be reasonably high. In other words, the scale of potential accidents should not be negligible. To realistically achieve this goal, on the one hand, we are looking for applications operating in public environments. On the other hand, AI systems must be embodied in some kind of mobile robotic platform, whose operating dynamics pose some danger to the bystanders. #### 3.1.4 Autonomy As mentioned in Section [2](#S2 "2 Specific features of certain AI systems ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"), we refer to autonomy as the ability of the AI system to perform without external influence or human involvement. This feature cuts across the three sets of requirements defined for the development of the inclusion criterion. On the one hand, and although the design of autonomous systems is not unique to AI-based approaches, achieving high degrees of autonomy, without requiring human intervention or supervision, is possible today thanks to recent technological advances in AI. On the other hand, the fact that the system does not require human intervention allows liability regimes to be triggered not towards a person responsible for operation or supervision, but towards the AI provider or user. This way, it would be difficult to link a human action or omission to a certain output that caused the damage. Finally, the ability of the system to operate in public environments in a fully autonomous fashion is also related to the risk profile, the type of damage and its significance. ### 3.2 Expert knowledge Followed by the specification of the inclusion criteria, we considered a number possible scenarios that would fulfil the aforementioned requirements. At this stage, as illustrated in Fig. [3](#S3.F3 "Figure 3 ‣ 3.1 Inclusion criteria ‣ 3 Methodology ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"), the interaction among the members of our team with different disciplinary background (law, engineering, computer science, robotics, human cognition, human factors and social science) was pivotal for the final identification of the use cases. ### 3.3 Use cases structure As can be observed in Fig. [4](#S3.F4 "Figure 4 ‣ 3.3 Use cases structure ‣ 3 Methodology ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"), for each use case, we address the following steps. First, we describe the main characteristics of the AI system(s) involved in the application and the operating environment. Second, we describe the hypothetical events leading to the damage, as well as the damage itself. Third, we identify the possible causes of the accident. Fourth, we identify the possible liable parties. And finally, we detail the procedure for obtaining compensation, highlighting the requirements and the main difficulties faced by the victim. ![Proposed structure to describe the use cases.](https://media.arxiv-vanity.com/render-output/7078816/x4.png) Figure 4: Proposed structure to describe the use cases. 4 Case studies --------------- ### 4.1 General Considerations This work focuses on fault- and product-based liability regimes. Although a strict-based liability regime could apply in some cases, where the claimant would only have to prove that the risk arising from the sphere of the liable party (i.e., the user or the operator) materialised, it is very likely that the user/operator would claim liability against the providers of the end product or the individual AI components. Therefore, the need to prove fault or defect could also apply. Under national tort law, the claimants would have to prove that the defendant caused the damage negligently. That involves proving not compliance with the applicable standard of care (i.e., fault) and the causal relationship with the damage. Under product liability, the claimants would have to prove that the AI-based product was defective and the causal link with the damage. In both cases, expert opinion, access to technical documentation on the design, functioning and operation of the system, as well as access to relevant data and system logs (e.g, inputs, outputs, internal states of the subsystems) corresponding to the last few minutes before and after the accident, would be required. The expert would have to understand in considerable detail the overall functioning of the system, components involved, inter-dependencies, etc., and be able to interpret the available data. All this poses in itself a considerable burden and cost for the claimant. Once the above requirements are fulfilled, the expert must face the causal analysis to prove fault or defect and the link with the damage. Last but not least, we assume that victims can claim liability against multiple liable parties, including product manufacturers, users or operators, and providers of AI systems integrated in the products. Although the burden of proof will depend on the type of defendant, our approach assumes the worst case scenario where the claimant raises the claim against one or many of the AI systems providers, needing to go into the technicalities of how the AI system works and what its state was at the time of the accident. ### 4.2 Autonomous Urban Cleaning Robots #### 4.2.1 Characteristics of the AI system and operational environment An autonomous fleet of cleaning robots operates in pedestrianised public areas. The robots are equipped with multiple sensors (e.g. cameras, LiDAR, radar, ultrasound, GPS, etc.), digital information (e.g., digital maps), connectivity features, including communication between the robots and between the robots and the infrastructure, and actuators to perform the cleaning tasks. The robots include multiple AI/ML systems, each one responsible for a particular task (e.g. perception systems for the detection and location of litter and dirt, robot localization and mapping, detection of obstacles, trajectory planning or lateral and longitudinal control of the platform, etc.). Some examples of the current state of this kind of technology are depicted in Fig. [5](#S4.F5 "Figure 5 ‣ 4.2.1 Characteristics of the AI system and operational environment ‣ 4.2 Autonomous Urban Cleaning Robots ‣ 4 Case studies ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"). ![From left to right, three examples of the current state of this kind of technology: the systems developed by ENWAY ](https://media.arxiv-vanity.com/render-output/7078816/x5.png) Figure 5: From left to right, three examples of the current state of this kind of technology: the systems developed by ENWAY [[18](#bib.bib25 "Autonomous city cleaning")], Trombia [[43](#bib.bib26 "Trombia free. electric and autonomous street sweeper")] and Boschung [[7](#bib.bib27 "Urban-sweeper s2.0 autonomous")]. Each cleaning robot belongs to a fleet deployed throughout the city. An employee is in charge of defining the operation areas to be cleaned (i.e., the missions) and monitoring multiple robots in simultaneous operation from a remote-control centre. The fleet can coordinate the safe cleaning of the selected region, the interaction with pedestrians, and the avoidance of obstacles, with a high degree of autonomy. The role of the human operator is of a supervisory nature following the human oversight strategy defined by the cleaning robot provider. #### 4.2.2 Description of the events leading to damage A colourful baby stroller is parked in front of an advertising banner with similar colour patterns while the baby’s guardian looks at a nearby shop window. One of the cleaning robots does not seem to recognize the stroller as an obstacle and collides with it. The stroller is damaged and the baby slightly injured. #### 4.2.3 Potential causes of the accident The accident described in this use case could have been caused by any of the following issues: * A flaw in the vision component of the system, the AI perception model, that caused a failure in the detection of the trolley because it was somehow camouflaged with the background (due to the presence of an advertising banner with colour and pattern similar to the ones of the trolley). This led to an image segmentation error (i.e., a false negative) that considered the stroller as part of the background of the banner. * An AI cybersecurity vulnerability in the perception model that was exploited by a third party causing the AI to malfunction. For example, an adversarial machine learning vulnerability could have been exploited by vandals by placing printed stickers on the advertising banner to prevent the detection of objects around it. * An attack targeting the robot’s sensors, such as blinding, jamming or spoofing. In the absence of mechanisms to mitigate this type of denial-of-service attacks, the perception and planning systems can fail to detect the baby stroller, preventing the robot to avoid the collision. * A cyberattack that led to the compromise of any of the robot’s Electronic Control Units (ECUs). An attacker could gain unauthorised access and force the system to take an unsafe action causing the robot to collide with the stroller. The attacker could launch such an attack locally (e.g., through physical access to the robot) or remotely, either from the vicinity (e.g., through WiFi or Bluetooth access) or from Internet potentially from anywhere in the world. On top of that, the presence of the flaw, or the conditions in which the accident took place, could have the following origins: * Failure due to an updated version of the perception system devised to reduce the number of false positives and negatives of previous versions (resulting in many regions not being cleaned correctly). The confidence threshold for considering a detection as a true obstacle was increased to reduce the number of false positives. Unfortunately, the similarity between the texture and colour of the baby stroller with the background of the advertising banner from the camera’s perspective resulted in a potential obstacle being detected with not very high confidence and discarded by the new updated version of the segmentation module. * Failure of the provider of the perception system to timely distribute a software update to fix a known safety flaw or security vulnerability of the AI system. For example, if the harm had been caused due to a flaw in the segmentation mechanism, the manufacturer could have released a software update to address it by implementing a sensor fusion approach that included range-based sensor data, which would have allowed the 3D volume of the stroller to be detected as an obstacle, and be avoided by the path planning system. Similarly, if the harm had been produced as a result of the malicious exploitation of an adversarial machine learning vulnerability, the manufacturer could have released a software update to mitigate it by, for example, updating the AI model with a more resilient version trained with adversarial samples. * Failure of the operator of the system to timely apply the software update that could have been made available by the manufacturer to fix the safety flaw or security vulnerability that led to the harm described in this scenario. * Failure of the remote human operator to properly supervise the operation of the robot fleet. The failure may be due to inadequate supervision by the human operator (i.e., incorrect compliance with the human oversight mechanisms defined by the provider), or to defects in the human-robot interfaces (i.e., deficiencies in the human oversight mechanisms defined by the cleaning robot producer). #### 4.2.4 Potential liable parties The autonomous cleaning robots are very complex systems with many different internal components based on a variety of AI models, which affect each other and are usually developed and integrated by different parties or subcontractors. The faulty behaviour or the defect can be in one of the components, in several components, or in a faulty integration of these components. Consequently, there could be multiple liable parties within the complex supply chain involved in the development of the cleaning robots. Therefore, potentially liable parties include: * Final producer of the cleaning robots. * Provider of individual AI components integrated in the cleaning robots (e.g. navigation, perception systems such as vision component, path planning, low-level controllers and operational interfaces). * Professional user or operator: the municipality, or a company providing the service to the municipality, deploying the cleaning robot services in the urban area. * Adversaries that attack the system by exploiting vulnerabilities in the AI components (e.g., adversarial machine learning) or in the broader software and hardware surface (e.g., buffer overflows). #### 4.2.5 Obtaining compensation As described in the scenario, there may be multiple alternative or cumulative reasons for the damage, including low confidence detection, cyberattacks against the AI models or the underlying software and hardware architecture, etc. In the context of our work, they should all be assessed by means of expert opinion to prove fault or defect. For instance, an expert could determine that the result of the perception system seems to be wrong at the time of the collision, since the stroller does not seem to appear in some list of detected objects (if available). The expert may thus be able to prove that the stroller was not properly detected (without indicating the cause). She may also be able to discard that the sensors were jammed or spoofed since the raw data seems correct (raw data should be interpretable). The expert could further suppose a correlation between such detection failure and the control decision of the robot to move forward until colliding with the stroller. This may allow the claimant to establish *prima facie* evidence. However, proving correlation does not allow discarding alternative causes of the damage (e.g., the stroller could have moved towards the robot abruptly because it was unattended without a brake). With regard to the opacity of AI systems, it may be impossible to infer a clear causal link between a specific input and the harmful output. Concerning the unpredictability feature, it is possible that the same robot, in an almost identical scenario, but with slight variations in lighting, would have behaved correctly (which could be used as evidence in favour of the defendant). It is worth noting that, in any case, the expert would require access to the the logs of the cleaning robot and technical information about the internal AI systems in order to be able to conduct a forensic analysis of the incident and reconstruct the facts that led the robot behave in the way it did. This type of information is often only available to the manufacturer of the robot. ### 4.3 Autonomous Delivery Drones #### 4.3.1 Characteristics of the AI system and operational environment A fleet of autonomous delivery drones (a.k.a. unmanned aerial vehicles or unmanned aircraft) is used to transport small and medium-sized packages (maximum 15 kg) to people’s homes in rural and suburban areas, where there are sufficiently large landing areas (of at least 2m2 due to the size of the drones) to release cargo safely and without interaction with end-users. The drone is a multirotor quadcopter with horizontally aligned propellers. The drones are autonomous in the sense that the operator loads the cargo into the drone, establishes the final destination, and the drone is capable of performing autonomous vertical take-off and landing, navigating to destination, dropping off the parcel and returning to origin without the need for a pre-defined key points route. They are capable of detecting and avoiding possible obstacles within the planned local route (e.g., birds, high voltage lines, trees, etc.), being robust against moderate wind disturbances. The drones are equipped with multiple sensors and communication systems. Inertial Measurement Units (IMUs) are used to calculate the orientation, altimetric pressure, velocity, rotation rate, angular velocity and tilt, linear motion and heading of the drone. GNSS is used to perform global localization and navigation. Light Detection and Ranging (LiDAR) sensors and digital cameras are used as the input to sense the environment, detect obstacles, and ensure a clear and safe landing zone. The drones have short-range communication systems within the visual line of sight (VLO) with the operator, but in order to continue monitoring the system beyond the visual line of sight (BVLO), they also have 3G/4G/5G cellular communications. The drones make use of multiple AI components, each one responsible for a particular task, e.g., scene understanding and obstacle detection, autonomous localization and navigation, etc. They include four main operation modes: (1) global planning, (2) take-off, (3) global and local navigation and (4) landing. The most critical tasks are vertical take-off and landing, where the drone continuously monitors that the planned local trajectory is clear of any obstacles. Depending on the weight of the cargo and wind conditions, the delivery mechanism may involve landing the drone or landing the delivery by hovering a few meters above the delivery spot while releasing a wire with the delivery tethered to it until it reaches the ground. The operator must comply with all requirements established by applicable civil aviation regulations including the verification that the lighting and weather conditions for the intended trip (outbound and return) are within the specifications provided by the provider. They must check weight and dimensions of the cargo, the battery status and available range, and verify that the landing conditions at the destination are as required by the provider’s specifications. Only if all required conditions are met, the drone can be loaded with the cargo and launched for delivery. Thanks to the mobile communication interface, the operator can monitor the entire process remotely. Some examples of current developments in autonomous delivery drones technology are depicted in Fig. [6](#S4.F6 "Figure 6 ‣ 4.3.1 Characteristics of the AI system and operational environment ‣ 4.3 Autonomous Delivery Drones ‣ 4 Case studies ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"). ![From left to right, three examples of the current state of this kind of technology: the systems developed by Wing ](https://media.arxiv-vanity.com/render-output/7078816/x6.png) Figure 6: From left to right, three examples of the current state of this kind of technology: the systems developed by Wing [[47](#bib.bib24 "Better delivery. aircraft that scale. automation that scales.")], Amazon Prime Air [[2](#bib.bib23 "Amazon prime air prepares for drone deliveries.")] and Zipline [[50](#bib.bib22 "Instant logistics.")]. #### 4.3.2 Description of the events leading to damage The drone is approaching a suburban area to deliver a cargo. The day is a bit windy and cloudy, so the delivery mechanism selected by the operator was by landing the drone. Once the delivery point has been detected and checked that it is clear of obstacles, the drone starts the vertical landing manoeuvre. A child from the neighbouring house is playing in his garden and, when he hears the drone approaching, he suddenly runs towards it. The vertical landing is not properly aborted, and the drone eventually hits the child. As a result of the impact, the drone’s control system becomes unstable and the drone crashes into a nearby car parked in the street. The child is seriously injured, and the car undergoes considerable damage. #### 4.3.3 Potential causes of the accident The accident could have been caused by any of the following issues: * A flaw in the AI-based obstacle detection system that caused a failure in the detection of the child approaching to the landing zone. This failure could have been produced due to multiple reasons. For example, some bias or mislabelling in the training data of the perception system related to small objects on the ground or inadequate lighting conditions. * A flaw in the AI-based decision-making and control systems of the drone that caused it not to react in time to the sudden presence of the child, properly correcting the trajectory. This could have been due to several reasons, such as insufficient reaction time of the control system or because of stability problems due to inappropriate wind conditions. * Failure of the AI systems to handle evolving unfavourable weather conditions. The lighting and weather conditions were adequate at the time the operator made the decision to deliver the parcel with the autonomous drone but worsened as the drone approached its destination. No self-diagnosis functions were incorporated into the autonomous delivery drone and the mission was not remotely aborted by the operator. Poor lighting conditions would have reduced the accuracy of the perception systems. Inappropriate wind conditions would have compromised the control system’s ability to perform obstacle avoidance manoeuvres. * A deliberate cyberattack on the drone’s systems, targeting the drone’s sensors (e.g., sensors blinding, jamming or spoofing), exploiting an AI cybersecurity vulnerability (e.g., an adversarial machine learning attack to the camera-based perception system) or exploiting a cybersecurity vulnerability in the broader hardware and software architecture (e.g., unauthorised access to the internal systems of the drone through the wireless interface or cellular communications). #### 4.3.4 Potential liable parties Similarly to the cleaning robots described in Section [4.2](#S4.SS2 "4.2 Autonomous Urban Cleaning Robots ‣ 4 Case studies ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"), autonomous delivery drones are very sophisticated systems with many different internal components, some of them making use of different kind of AI models. These systems are designed to interact with each other, and can be developed and integrated by different parties and subcontractors. In this context, the origin of the faulty behaviour or defect can be in one of component, in several components, or in their specific integration. Therefore, there are multiple potentially liable parties, including: * Final producer or manufacturer of the autonomous delivery drones. * Provider or manufacturer of individual AI components integrated in the drone (e.g., localization, navigation, perception systems, low-level controllers, take-off, landing and delivery mechanisms, and operational interfaces). * Professional user or operator: the company providing the delivery service deploying the autonomous delivery drones in the rural and suburban areas. * Adversaries that attack the system by exploiting vulnerabilities in the AI components (e.g. adversarial machine learning) or in the broader software and hardware surface (e.g. jamming or spoofing of sensor signals, or buffer overflow vulnerabilities in the software implementation). #### 4.3.5 Obtaining compensation Let us consider that an expert opinion has access to the inputs/outputs and internal states of the perception system a few minutes before the accident and during the accident, as well as the technical documentation to enable its correct interpretation. One of the first hypotheses to investigate would be the case that the child was not correctly detected by the perception module. For example, if the system records some list of moving obstacles detected on the ground, the expert could check whether or not any obstacle could be associated with the boy on that list. If no obstacles were detected before or during the accident, this could be presented as evidence of fault or defect, and may also serve to presume a causal link to the damage. This is the best case scenario, but this internal list may not be accessible or available. The perception system could provide other types of information represented in a format not directly interpretable by humans, as input to the decision-making, path planning and control modules. If it is possible to demonstrate that the child was properly detected (e.g., with the aforementioned list of obstacles), then the expert would have to investigate why the drone’s decision making, path planning and control systems did not avoid the collision. Some internal information would be needed on environmental conditions measured by or communicated to the drone (e.g., wind speed, lighting conditions), drone status (e.g., height, rotors speeds, pose), intended route (e.g., the local trajectory of the drone) and control actions (e.g., target and current speed of all rotors). If all this information were available, it would even be possible to reconstruct the accident and verify that the planned route and control actions did not prevent it. This could be presented as evidence of fault or defect, and link to the damage. Again, this is a favourable scenario, but this information may not be directly available, or not directly interpretable. It could be the case that the entire system is based on a completely opaque end-to-end model (from the raw sensor inputs to the final control actions on the rotors), so that intermediate representations are neither available nor interpretable. Under these circumstances, it is possible that an expert may be able to establish some correlation between some possible alternative causes and the damage caused to the child and the parked vehicle. However, the lack of causality and the opacity of the the drone’s AI systems can make it impossible to establish a clear causal link between any of the possible alternative causes and the accident. In addition, if an analysis of raw sensor data (e.g., IMUs, cameras, LiDAR, GNSS) shows that the sensors were jammed or spoofed, it would be very difficult to determine the source of the attack as sensor data would be compromised. ### 4.4 Robots in education #### 4.4.1 Characteristics of the AI systems and operational environment Socially assistive robots (a.k.a. social robots) typically are used in complex physical environments with the purpose to interact with humans [[15](#bib.bib8 "Socially intelligent robots: dimensions of human–robot interaction")]. They can be embedded into the social environment and autonomously (or with limited human supervision) navigate the physical space and interact with humans (usually in restricted settings). They usually have explicit or implicit anthropomorphic features and they can perform across a spectrum of behavioural policies which typically depend on their morphology. Despite the diverse characteristics of robots, for the present use case we consider a robot that has a configuration that would allow its effective use in educational contexts to support the socio-emotional skills of autistic children. In this sense, the robot is mobile and includes perception components, navigation, facial, speech and emotion recognition, localization, decision-making, mapping and path planning systems, manipulation, grasping, expressive communication and other AI-based systems. It is about 1.30m tall and it has arms, mainly for gesturing, as an expressive behaviour. The robot is equipped with multiple sensors to detect the environment, including cameras, 3D sensors, laser, sonar infrared, tactile, microphones and inertial measurements units. It is equipped with a tablet as an alternative means for communication. It can perceive and process speech using AI systems, including a module for verbal communication with expressive voice. It is capable of detecting obstacles, people, and facial expressions using AI-based computer vision algorithms. Lastly, it is equipped with an AI-based cognitive architecture which combines task-related actions and socially adaptive behaviour for effective and sustained human-robot interaction. Some examples of similar prototypes already tested in operational environments are depicted in Fig. [7](#S4.F7 "Figure 7 ‣ 4.4.1 Characteristics of the AI systems and operational environment ‣ 4.4 Robots in education ‣ 4 Case studies ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"). In addition, for the robot to be tailored for that specific application (autistic children in school environments) it comes with pre-designed interventions for cognitive engagement (task-oriented). The robot is capable to adapt its social behaviour according to the child’s personal abilities and preferences in the context of autism. ![From left-to-right, top-to-bottom, five different robotic platforms in the context of education: De-Enigma ](https://media.arxiv-vanity.com/render-output/7078816/x7.png) Figure 7: From left-to-right, top-to-bottom, five different robotic platforms in the context of education: De-Enigma [[16](#bib.bib21 "Playfully empowering autistic children")], Pepper [[5](#bib.bib20 "Socially intelligent robot helps autistic somerset pupils")], QTrobot from LuxAI [[30](#bib.bib19 "QTrobot. expressive humanoid social robot.")], Nao robot [[49](#bib.bib18 "Could social robots facilitate children with autism spectrum disorders in learning distrust and deception?")] and Haru [[12](#bib.bib13 "Child-robot collaborative problem-solving and the importance of child’s voluntary interaction: a developmental perspective")]. Focusing on the need for inclusion and preparation of autistic children for an independent life, a mainstream primary school school unit decided to increase the number of admissions for high functioning autistic children. However, the staff is not adequate to undertake individual support for the autistic children. For this reason, the director of the school decides to introduce one robot per class for personalized interaction with the autistic children for the improvement of their social skills. The school signs a contract with the company providing robotic educational services using the robot (as well as all integrated AI systems) to provide the educational services at the school. The robots are installed in the classrooms for regular personalized interventions with the autistic children and for voluntary interaction during children’s free time. The robots are mobile, and they can navigate dedicated space during the children’s free time if a child requests so. The robot learns from the interaction with the autistic children and adapts its social behaviour. While during the lesson time the robot is available only for children with autism to perform personalized interventions, during the free time, any child of the school can interact with the robot at dedicated spaces. #### 4.4.2 Description of the events leading to damage In this use-case, we focus on harm which might be caused because of the adaptive behaviour of the robot. Some property damage may also occur. We propose three different scenarios. Scenario 1: Physical harm and property damage towards a child with darker skin. The robot fails to perceive a child with darker skin, and it causes physical harm to the child. The blow caused by the robot also resulted in the breakage of the child’s glasses. Scenario 2: Physical harm and property damage towards a child that behaves in an unexpected way. The robot fails to respond in an appropriate way to an autistic child that might have unexpected behaviour and hits the child. The blow caused by the robot also resulted in the breakage of the child’s glasses. Scenario 3: Long-term psychological harm towards a neurotypical child. During children’s free time at the school, a neurotypical child interacts with the robot on a regular basis. The robot adapts to the child’s needs and requests which subsequently leads the child to develop potentially medically recognised pathological such as addictive behaviour towards the robot (e.g., increase preference of the child to interact with the robot rather than with humans causes an abnormal socio-emotional development), depression (e.g., social isolation can negatively influence psychological health leading to depressive symptoms), emotional distress (e.g., the inappropriate robot response in scenario 2 leads to psychological trauma), or abnormal cognitive and socio-emotional development and dependencies. #### 4.4.3 Potential causes of the accident For the scenarios 1 and 2 the damage could have been caused by the following issues: * Flaw in the robot’s perception module that does not perceive the child due to biases in the robot’s perception system with respect to children with certain physical characteristics. * Flaw on the decision-making and path planning modules of the robot which fails to adapt to the child user, from a prolonged period of interaction with children with certain behavioural characteristics. * Flaw on the control module which fails to consider the physical and behavioural differences of the child user. For the scenario 3 the damage could have been caused by the following issue: * Robot adaptation: the adaptation module of the robot embeds an intrinsic motivation element which contributes to the human-robot sustained interaction. The robot’s internal goal to remain in an optimal level of comfort for the child-user contributes to its adaptation to the specific child’s characteristics, needs and behaviours. This robot behaviour develops a closed loop of cognitive and socio-emotional interaction with the child that might lead to the child’s addiction to the specific robot behaviour. In a long-term interaction the child might exhibit a preference for interaction with the robot rather than human social agents. In that case, the child and the robot develop in a mutual adaptation loop. #### 4.4.4 Potential liable parties Social robots can be very complex systems with many different AI-based components that are integrated in a single platform. These components fall into three main categories (i) perception, decision-making and planning, and action and execution, and they need to interact with each other. Similar to the use cases presented in Sections [4.2](#S4.SS2 "4.2 Autonomous Urban Cleaning Robots ‣ 4 Case studies ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof") and [4.3](#S4.SS3 "4.3 Autonomous Delivery Drones ‣ 4 Case studies ‣ Liability regimes in the age of AI: a use-case driven analysis of the burden of proof"), the origin of the faulty behaviour or defect can be in one of the components, in multiple components or in their integration. In addition, for the specific scenario, there might be modules that refer to a task-specific robot behaviour and other modules that relate to the robot’s social interaction. For the task-specific robot behaviour, a separate company might be involved who are specialists in pedagogy and autistic children. As such, there are multiple potentially liable parties including: * Manufacturer of the robot who is also the provider of the robot’s AI systems. * Providers of the AI modules integrated into the robot before it is placed on the market. * Provider of the educational system which provides the task-specific modules. * Company using the robot to provide educational services. * The school that makes use of educational robotic services. #### 4.4.5 Obtaining compensation For product-based liability the victim should prove defectiveness of the robot and link with the damage. For fault-based liability, the victim should prove negligence by some of the potentially liable parties and link with the damage. As described above the compensation claims can be directed against the robot manufacturer, the providers of some of the AI components, or the user (the company providing the educational services). Although the victim could also claim against the school, it should be noted that the robot was intended to function without supervision by a teacher, and therefore courts are unlikely to uphold such a claim. On the one hand, courts may or may not infer defect or fault and causality from the fact that the robot caused the relevant injuries. The fact that the AI-systems influencing the robot’s behaviour adapted during the latter’s autonomous operation (e.g., by means of self and continuous learning) may put into doubt such an inference. Courts may namely take into account that the robot’s behaviour depends on various circumstances that may be considered unforeseeable for the provider or user (namely the precise operating environment, human interaction and the input data the robot is exposed to). Demonstrating the extent to which the robot’s mechanism of adaptation to the behaviour of the children it interacts with led to a change in behaviour not foreseen by the provider (resulting in harm) is quite complex. On the victim’s side, it would require, at the very least, expert knowledge and access to a considerable level of documentation regarding both the system design and the logs while it was in operation. On the other hand, regarding potential issues not linked with self and continuous learning, proving defect or negligence, and causal link with the damage, would require expert analysis of the design and functioning of the robot, the relevant AI-systems, or the human oversight mechanisms foreseen for users. In addition, the injured parties should be entitled to access the necessary level of information for the purposes of their claim, including the aforementioned logs while the robot was in operation. On the basis of such information, an expert may notably be able to determine whether the result of the robot’s perception system is correct at the time of the accident, for instance by checking whether the physically injured child appears in the list of detected objects. The expert may also review the relevant control decisions of the robot, e.g. the decisions to interact in a certain way with the affected children, or the decision to actuate certain movements. The analysis may also inform the supposition of a correlation between, for instance, a detection failure and a relevant control decision of the robot. Regarding the role of the user, with a sufficient level of access to information, such as the user oversight requirements (e.g., instruction of use), and data logged during the operation, the victim could establish a possible failure to comply with the user’s standard of safety or care. All these elements could serve as evidence to prove defect or negligence. However, due to the complexity of the system as a whole, as well as the lack of causality, unpredictability or opacity of AI systems, it would be very complex, even for an expert, to establish clear causation between the specific operating conditions and the harmful event. For example, it would be quite complex to exclude other elements such as inappropriate behaviour on the part of children or supervisors. 5 Conclusions -------------- In this work, we presented three hypothetical use cases of products driven by multiple AI systems, operating autonomously in non-private environments, causing physical harm (including one scenario involving mental health) and property damage to third parties. These use cases, selected based on a set of inclusion criteria, represent AI technologies with a high level of readiness operating in real-world scenarios. We described them realistically and with a high degree of technical granularity. In addition, we explored the scenarios from the perspective of product and fault liability regimes, identifying the possible causes of the accident, the liable parties, and describing the technicalities underlying the process of obtaining compensation from the victim. Through this process, we highlighted the technical difficulties that an expert opinion would face in trying to prove defect or negligence, and the causal link to the damage. This is due to certain characteristics that some AI systems intrinsically contain, namely lack of causality, opacity, unpredictability and self and continuous learning We attempted to provide certain degree of complementarity between the three use cases to allow the analysis of different factors when dealing with the burden of proof. As a further contribution, we identified and described these specific AI features in detail, so that they can serve as a basis for other studies related to legal issues and AI. Our analysis indicates that liability regimes should be revised to alleviate the burden of proof on victims in cases involving AI technologies. \acks The authors acknowledge funding from the HUMAINT project at the Digital Economy Unit at the Directorate-General Joint Research Centre (JRC) of the European Commission. The authors also acknowledge the valuable contributions of Mr. Bernd Bertelmann and Ms. Ioana Mazilescu. Disclaimer The views expressed in this article are purely those of the authors and may not, under any circumstances, be regarded as an official position of the European Commission.
aeadebd8-ef0a-4d36-8768-ea5bea098cfc
trentmkelly/LessWrong-43k
LessWrong
Do we have a plan for the "first critical try" problem? In list of lethalities, it seems that the two biggest ones are: * A.3 We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again. * B.1.10 On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions. My understanding is that interpretability is currently tackling the second one. But what about the first one? It seems a bit tricky because it is a powerful outside view argument. It is incredibly rare for software to work on the first test. ML makes it even more difficult since it isn't well suited to formal verification. Even defense in depth seems unlikely to work (on the first critical try, there is likely only one system that is situational aware). The only thing I can think of is making the AGI smart enough to takeover the world with the help of its creators but not smart enough to do so on its own or to solve it's own alignment problem (i.e. it does not know how to improve without goal drift). I also suppose non-critical tries give some data, but is it enough? What does the playing field for the first critical try look like?
ea7fe8d2-b86d-4d38-ae36-8381e9ba053f
trentmkelly/LessWrong-43k
LessWrong
Maxent and Abstractions: Current Best Arguments This post is not-very-distilled and doesn’t contain much background; it’s intended for people who already have the context of at least these four posts. I’m putting it up mainly as a reference for people who might want to work directly on the math of natural abstractions, and as a technical reference post. There’s various hints that, in most real-world cases, the distribution of low-level state given high-level natural abstractions should take the form of a maximum entropy distribution, in which: * The “features” are sums over local terms, and * The high-level variables are (isomorphic to) the Lagrange multipliers More formally: we have a low-level causal model (aka Bayes net) P[XL]=∏iP[XLi|XLpa(i)]. Given the high-level variables XH, the distribution of low-level variable values should look like P[XL|XH]=1ZP[XL]eλT(XH)∑ifi(XLi,XLpa(i)) … i.e. the maximum-entropy distribution subject to constraints of the form E[∑ifi(XLi,XLpa(i))|XH]=μ(XH). (Note: λ, fi, and μ are all vector-valued.) This is the sort of form we see in statistical mechanics. It’s also the form which the generalized Koopman-Pitman-Darmois (gKPD) theorem seems to hint at. I don’t yet have a fully-satisfying general argument that this is the main form which abstractions should take, but I have two partial arguments. This post will go over both of them. Maxent Telephone Argument Two different nested layers of Markov blankets on the same underlying causal DAG Quick recap of the Telephone Theorem: information about some variable X passes through a nested sequence of Markov blankets M1,M2,…. Information about X can only be lost as it propagates. In the limit, all information is either perfectly conserved or completely lost. Mathematically, in the limit P[X|Mn]=P[X|Fn(Mn)] for some F such that Fn(Mn)=Fn+1(Mn+1) with probability approaching 1 as n→∞; F is the perfectly-conserved-in-the-limit information carrier. In this setup, we can also argue that the limiting distribution limn→∞P[X|Mn] should h
9453fee5-b240-4d63-b514-b9e85919c296
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1381 ( This post is an elaboration on “tractability of discrimination” as introduced in section III of Can we efficiently explain model behaviors? For an overview of the general plan this fits into, see Mechanistic anomaly detection and Finding gliders in the game of life .) Background We’d like to build AI systems that take complex actions to protect humans and maximize option value. Powerful predictive models may play an important role in such AI, either as part of a model-based planning algorithm or as part of a learned policy. Either way, this introduces the risk that we will select actions predicted to lead to good-looking outcomes without being able to distinguish predicted outcomes that are actually good from predicted outcomes where our measurements were corrupted. If tampering with sensors is easier than actually achieving our goals then we may inadvertently build very powerful systems taking creative actions to corrupt our measurements. If we iteratively improve and harden our measurements, this can lead to AI systems that work well for a long time before abruptly and catastrophically disempowering humanity. I consider this one of the conceptually cleanest alignment problems, and I expect similar dynamics to play a role in realistic alignment failures even if those failures aren’t this simple. ARC’s current work is focused on decisive solutions to this problem, though it looks like the same approach may also apply directly to identifying treacherous turns more generally. Are distinct mechanisms enough? ARC has been looking for training strategies that avoid this problem by leveraging only the fact that sensor tampering is “weird,” i.e. conceptually distinct from the normal mechanism giving rise to predictions of good-looking outcomes on the training distribution. More specifically, at training time our model predicts coherent sensor readings because it predicts that sensors reflect coherent structure in the world. But if someone tampers with sensors to show a convincing fiction, then the predicted observations are coherent because the fiction was designed to look coherent. This suggests that different mechanisms are responsible for (actions that lead to good-looking outcomes for the normal reasons) and (actions that lead to good-looking outcomes via sensor tampering). If we are able to detect that difference by looking at the internal behavior of a predictor, then we may be able to use that to avoid sensor tampering. It’s unclear if “distinct mechanisms” is a strong enough assumption to avoid sensor tampering. We hope that it is, and so we are trying to define formally what we mean by “distinct mechanisms” and show that it is possible to distinguish different mechanisms and that sensor tampering is always a distinct mechanism. If that fails, we will need to solve sensor tampering by identify additional structure in the problem, beyond the fact that it involves distinct mechanisms. Roadmap In this post I want to explore this situation in a bit more detail. In particular, I will: Describe what it might look like to have a pair of qualitatively distinct mechanisms that are intractable to distinguish. Discuss the plausibility of that situation and some reasons to think it’s possible in theory. Emphasize how problematic that situation would be for many existing approaches to alignment. Discuss four candidates for ways to solve the sensor tampering problem even if we can’t distinguish different mechanisms in general. Note that the existence of a pathological example of distinct-but–indistinguishable mechanisms may not be interesting to anyone other than theorists. And even for the theorists, it would still leave open many important questions of measuring and characterizing possible failures, designing algorithms that degrade gracefully even if they sometimes fail, and so on. But this is particularly important to ARC because our research is looking for worst-case solutions, and even exotic counterexamples are extremely valuable for that search. 1. What might indistinguishable mechanisms look like? Probabilistic primality tests The best example I currently have of a “hard case” for distinguishing mechanisms comes from probabilistic primality tests. In this section I’ll explore that example to help build intuition for what it would look like to be unable to recognize sensor tampering. The Fermat primality test is designed to recognize whether an integer n is prime. It works as follows: Pick a random integer a < n. Compute a^n mod n. This can be done in time polylog(n) via iterated squaring. Output “pass” if a^n = a (mod n). A prime number always passes. In almost all cases where this test passes, n is prime. And you can eliminate most false positives by just trying a second random value of a. But there are a few cases (“Carmichael numbers”) for which this test passes for most (and in fact all) values of a. Primes and Carmichael numbers both pass the Fermat test. This turns out to be equivalent to saying that “For all primes p dividing n, (p-1) divides (n-1).” For primes this happens because n is a prime and so there is only one prime divisor p and p-1 = n-1. For Carmichael numbers it instead happens because (p-1) and (n-1) are both highly divisible and a bunch of favorable coincidences occur. We can think of this as building a test that’s supposed to detect factors of n, and then there happens to be a ton of cancellation so that we don’t see anything. Suppose that we observed the Fermat testing passing in a bunch of random cases and formed the abstraction of “prime” to explain that phenomenon. Then we try to use AI to find a large prime that satisfies some additional property P. If we do this by using the Fermat test as our definition of primality, we may end up getting a Carmichael number, if the property P is rare amongst primes but common amongst composites. In reality there are probabilistic primality tests that don’t have false positives. But we can imagine a world where gradient descent is able to find the Fermat primality test without us having any algorithm that can learn one of those stronger tests. We’ll revisit the plausibility of this situation in section 2. The analogy An analogous situation could make it hard to detect sensor tampering: We have sensors that are often predicted to show smiling human faces (= the Fermat test often passes for random numbers n) We introduce the abstraction of “happy human” to explain those correlated observations (= we introduce the abstraction of ”n is prime” to explain the Fermat test passing) The model also predicts smiling human faces if an action would tamper with sensors (= the Fermat test also passes if n is a Carmichael number) We use AI to select for tractable actions that lead to predictions of happy human faces and end up with sensor tampering (= we select for Fermat test passing with an additional constraint P and end up with Carmichael numbers) This is a problem if the best way to act competently is to train a sophisticated predictor and then use it for planning, and if all sophisticated predictors have this unfortunate mechanism-indistinguishability property. The problem can materialize either when we build a model-based algorithm that plans against such a predictor; or when SGD learns a neural network internally which internally selects actions based on a similar predictor. It’s worth emphasizing that this feels like an extremely weird situation. In some sense the predictor “doesn’t know” whether there are really smiling humans or whether the sensors are tampered with, and couldn’t answer that question honestly even if it wanted to. You could imagine calling this “not an alignment failure” (or at least not an ELK failure) depending on how you define blurry words like “knowledge” and “intent.” But at the end of the day we would still end up with a competent AI which produced creative and competent actions that disempower humanity, and I think we should try to avoid this failure whatever you call it. If we had a clear definition of “knowledge” then I could imagine drawing a bright line and saying this isn’t an alignment failure, in the same way I would if e.g. my AI started a nuclear war in the same way that a similarly-competent human would. But I think you should be very hesitant to do that while “knowledge” is such a vague concept, because I think it facilitates sweeping all failures into the gaps that we don’t understand. Instead our priority should be to understand and ideally rule out this strange potential situation where the AI creatively and competently kills us all without even knowing that it does so. 2. Are distinct mechanisms efficiently distinguishable? ARC’s current “plan A” is that any pair of sufficiently distinct mechanisms can be efficiently distinguished. This would be really convenient for a bunch of reasons and it looks plausible, so we’ll keep investigating it until we feel we’ve ruled it out. In particular, right now I don’t have even a single example of a function f such that (i) there are two clearly distinct mechanisms that can lead to f(x) = 1 for any particular input x, (ii) there is no known efficient discriminator for distinguishing those mechanisms for a given input x. I would really love to have such examples. That said, there are still two big reasons that I’m skeptical about the conjecture that distinct mechanisms are always distinguishable: (a) it’s a really striking claim for which a failure to find counterexamples isn’t very strong evidence, and (b) in known examples like primality testing it still seems easy to imagine the situation where we can find the mechanism but not the discriminator, i.e. we haven’t yet found an automated way to learn a discriminator. Overall if I had to guess I’d say maybe a 20% chance that there is a formal version of “all distinct mechanisms are distinguishable” which is true and sufficient to rule out sensor tampering. This is still high enough that it’s a significant priority for me until ruled out. A. This is a striking claim and judging counterexamples is hard Any universally-quantified statement about circuits is pretty striking — it would have implications for number theory, dynamical systems, neural nets, etc. It’s also pretty different from anything I’ve seen before. So the odds are against it. One piece of evidence in favor is that it’s at least plausible: it’s kind of weird for a circuit to have a hidden latent structure that can have an effect on its behavior without being detectable. Unfortunately there are plenty of examples of interesting mathematical circuits (e.g. primality tests) that reveal the presence of some latent structure (e.g. a factorization) without making it explicit. Another example I find interesting is a determinant calculation revealing the presence of a matching without making that matching explicit. These examples undermine the intuition that latent structure can’t have an effect on model behavior while remaining fully implicit. That said, I don’t know of examples where the latent structure isn’t distinguishable. Probabilistic primality testing comes closest, but there are in fact good primality tests. So this gives us a second piece of evidence for the conjecture. Unfortunately, the strength of this evidence is limited not only by the general difficulty of finding counterexamples but also by the difficulty of saying what we mean by “distinct mechanisms.” If we could really precisely state a theorem then I think we’d have a better chance of finding an example if one exists, but as it stands it’s hard for anyone to engage with this question without spending a lot of time thinking about a bunch of vague philosophy (and even then we are at risk of gerrymandering categories to avoid engaging with an example). B. Automatically finding a good probabilistic primality test seems hard The Fermat test can pass either from primes or Carmichael numbers. It turns out there are other tests that can distinguish those cases, but it’s easy to imagine learning the Fermat test without being able to find any of those other superior tests. To illustrate, let’s consider two examples of better tests: Rabin-Miller : If a^(n-1) = 1 (mod n), we can also check a^(n-1)/2. This must be a square root of 1, and if n is prime it will be either +1 or -1. If we get +1, then we can keep dividing by 2, considering a^(n-1)/4 and so on. If n is composite then 1 has a lot of square roots other than +1 and -1, and it’s easy to prove that with reasonably high probability one of them will appear in this process. Randomized AKS : If n is prime and X is an indeterminate, then (a+X)^n = (a^n + X^n) = (a + X^n) mod n. This condition is hard to evaluate, but if we arbitrarily define X^r = 1 for some small number r then we can compute (a + X)^n mod n by iterated squaring in time O(r log n). If n is composite, it turns out there is a high probability that (a+X)^n != (a + X^n) mod n. While these tests are structurally similar to the Fermat test, there’s no obvious way to automatically transform the Fermat test into either of these stronger formats. And so if we had learned the Fermat test, it’s not clear we’d have any way to find one of the stronger tests without learning them. Moreover, while these tests are somewhat simple, they are more complex than the Fermat test, and so this learning process might be much harder than the problem of learning the Fermat test itself. To avoid this problem we’d like to specify a way to represent these stronger tests that uses fewer (additional) parameters than the Fermat test itself. As a stylized example, you could imagine specifying a stronger test by pointing to particular parts of the Fermat test and saying “randomize these parts.” As long as the number of parameters describing “which parts to randomize” was smaller than the number of parameters in the Fermat test itself, we’d be in business. While there’s no obvious representation like that, it seems plausible to me that we could find, particularly for the AKS test. This would be very suggestive that we could find a stronger test as quickly as we can find the Fermat test, but wouldn’t be totally decisive since e.g. gradient descent could have an easier time with one learning problem than the other (and that will depend on more details). Of course it’s particularly plausible that you could have the Fermat test but no robust test because that was humanity’s situation for many years! An automated strategy for extending the Fermat test to distinguish primes from Carmichael numbers would appear to teach us something fundamental about primality testing, that wasn’t known to humanity until the 70s, without making reference to any facts about numbers . So that seems like a tall order. I’m focused on the primality testing example in large part because it’s the case where the situation seems most hopeless. If we could find one in this case I would become meaningfully more optimistic about the overall conjecture. 3. Being unable to distinguish mechanisms is bad news If it’s impossible to distinguish distinct mechanisms, then that rules out approaches to ELK based on mechanistic anomaly detection alone. But the situation seems much worse than that, and in fact it seems like any approach to alignment is going to need to exploit some additional property of sensor tampering to rule it out. In particular: The kinds of approaches discussed in Eliciting latent knowledge are complete non-starters. All those approaches try to define a loss function so that the strategy “answer questions honestly” gets a low loss. But if you can’t learn to recognize sensor tampering then it doesn’t matter how low a loss you’d get by answering questions honestly, gradient descent simply can’t learn how to do it. Analogously, if there’s no simple and efficient primality test, then it doesn’t matter whether you have a loss function which would incentivize primality testing, you’re not going to be able to do it. Avoiding sensor tampering by interpretability runs into exactly the same problem. If there’s no efficient algorithm for recognizing sensor tampering, then no matter in what sense you “understand” what the model is doing, you still can’t tell whether it thinks an action is good because of sensor tampering or because of doing the task as intended. Interpretability helps us deal with the case where we have no loss function to incentivize the discriminator we need, but it won’t help if there simply doesn’t exist any efficient discriminator. Approaches like debate or amplification are unable to help if the best way to plan is to use a learned model for which discrimination is impossible. When we try to apply these methods we will use something like imitative generalization, performing a search over augmented-human-legible hypotheses. But in fact the best hypotheses look like opaque predictors, and so we are back to trying and failing to solve ELK for those hypotheses. This is discussed a bit more here in the ELK doc and was the motivation for focusing on ELK. On the bright side, when these approaches fail it may simply lead to uncompetitive systems rather than leading to sensor tampering, even in the worst case. But on the downside, sufficiently large competitiveness problems may be catastrophic without coordination. Recursive reward modeling can solve this problem (where debate can’t) only if a human and AI team can harden sensors against manipulation, including in the case where the “attacker” can use sophisticated predictors for which discrimination is hard but “defenders” can’t. I think this is very plausible, but probably does require substantive additional empirical assumptions; we’ll discuss the approach of “just make sensor tampering impossible” in section 4. Of course this is particularly plausible if we can apply AI systems in ways that don’t give them rich opportunities for sensor tampering, and in general the efficacy of approaches like RRM or debate depends a lot on the political and social environment in which AI systems are deployed. Some work on AI alignment focuses on understanding the inductive biases of SGD, how training shapes values, on the role of “habits” and non-maximizing behavior, on analogies with humans, and so on. But if planning against a sophisticated predictive model is an effective strategy for decision-making then ultimately every one of these approaches must confront the same issue: either by building systems that explicitly use planning against a learned model, or by training systems that need to use this strategy internally. Empirical work mostly seems helpful for grappling with indistinguishable mechanisms insofar as empirical work can help us notice this problem in advance if it’s a real problem. Overall my sense is that if distinct mechanisms can be indistinguishable, then any indefinitely scalable solution to alignment will need to use additional properties of sensor tampering or assumptions about the structure of optimal decision-making. That said, even in relatively pessimistic worlds there may be a long period of time (potentially extending to fairly superhuman systems) for which these issues are surmountable. 4. Approaches to sensor tampering assuming indistinguishable mechanisms In this section we’ll discuss three possible approaches to sensor tampering: Harden sensors such that tampering with them is harder than achieving the intended task. Harden sensors enough that you can’t tamper with them without “trying,” and then argue that if you are “trying” to tamper with sensors then you can discriminate. Try to characterize some other sense in which models “don’t know what’s going on” when mechanisms are indistinguishable. Design algorithms that work as long as the AI “knows what’s going on” and then validate that assumption. Our main hope is to find a way to efficiently distinguish distinct mechanisms. But it still seems worth fleshing out these alternatives. A. Harden sensors so that tampering is harder than the intended task We could design and deploy a lot of redundant sensors, and do science and red-teaming to understand the possible principles by which those sensors can be compromised. In the modern world it’s kind of insane to imagine a human rebellion that not only succeeded but left no trace of itself . Some dynamics make this seem more plausible, at least as a way to avoid the particular obstruction in this post (and therefore open up a much broader universe of “outs”): We can use AI systems to help us build sensors and understand their potential failures. There are potential difficulties in training such systems, but there are a lot of promising options that haven’t been very thoroughly explored and ultimately the obstruction in this post doesn’t speak directly to the difficulty of that problem. What matters becomes not the gap between humans and AI systems, but between “AI systems using opaque predictors for which discrimination is impossible” and “all the other AI.” To address the difficulty in this post we don’t need to build a sensor that reliably tells us if a robot rebellion is occurring, we just need to be able to arbitrate a debate about whether a particular set of recorded actions and observations is potentially the result of sensor tampering, and we can use a very conservative standard. That means we get to comb over observations for evidence of anything avoidably funny. Many plans for tampering with sensors are only scary because they involve creating external agents and delegating the sensor-tampering task to them. But we can closely supervise actions and look for evidence of this kind of delegation. We can also ask friendly systems to perform similar delegation to defend sensors (again modulo complicated questions about how to train such systems). On the other hand, there are two big reasons we’re scared of this kind of approach: It seems highly sensitive to how your predictive models generalize. If a “sensor reading” is simply whatever ultimately ends up in a dataset in a datacenter somewhere, then any successful robot rebellion would constitute impossible-to-detect sensor tampering since it could ultimately erase any evidence recorded by sensors. In practice we expect even deeper forms of underdetermination than “what is the definition of a sensor?” and so we would strongly prefer to have approaches (like mechanistic anomaly detection) that don’t require making strong structural assumptions of this type. It is really scary to imagine a world where in a strong sense all the most capable AI systems “want” to tamper with sensors and are prevented only because the balance of physical power favors the defender. Such a world feels highly unstable and vulnerable to mistaken assumptions or external shocks. This vague unease could be defused by thinking very carefully through the actual risks, but it’s not a direction that is intuitively appealing to us. B. Detect sensor tampering that requires “trying” So far we’ve talked about training a sophisticated predictive model and then imagined an exhaustive or random planner who searches against it. While it seems hard to make sensors so robust that a powerful AI cannot tamper with them, it seems much easier to make sensors so robust that you can’t tamper with them by randomly generating actions . In that case, even if we have a sophisticated predictor which doesn’t distinguish sensor tampering from the intended behavior, we don’t have trouble unless there is also an algorithm which proposes actions that have an elevated risk of leading to sensor tampering. Of course any policy which proposes coherent actions would tend to elevate the probability of successful sensor tampering alongside the probability of successfully doing the task. But we may hope to harden the sensor far enough that this kind of generic prior over competent actions still won’t lead to sensor tampering unless it is specifically driving at that goal. So this suggests a two step plan: Harden sensors enough that in some sense you can’t “accidentally” tamper with them. Argue that the proposer can distinguish actions that are “deliberately” tampering from actions that are trying to do the task as intended, even if the predictor cannot. Both of these steps seem challenging. Conceptually the second step seems like the main blocker — in cases where a predictor can predict success without distinguishing mechanisms, it also seems possible for a proposer to propose successful plans without distinguishing mechanisms. But it’s not at all a trivial link, and e.g. the primality testing example can’t be easily adapted into this format suggesting that it might be a fundamentally different step. One reason you might think that this approach is doomed is that there is always a simple action that tampers with the sensors, namely: build an external agent trying to tamper with the sensors. This gives an absolute lower bound on the probability of accidentally tampering with the sensors no matter how much they are hardened. But if the external agent itself must do something to deliberately tamper, then that still means that there exists an efficient procedure for detecting the sensor tampering and leaves us with a possible out — it seems plausible that this approach can be extended to work for external agents if and only if it works in cases with no external agents. It generally seems hard to write off possible approaches based on counterexamples that involve creating external agents, since those counterexamples can often be handled by separate tactics (we do ultimately need to confront such counterexamples after handling the “base case,” but it seems like a mistake to write off approaches too early based on this difficulty). C. Assume that your AI “knows what’s going on” It would be very strange if (human faces) → (sensors) was analogous to (primality) → (Fermat test), because it suggests that our AI’s grip on the concept of “human faces” is extremely tenuous. Such a model could be reasonably said to “not know” whether there are really humans in the room or if sensors have been tampered with. In section B we argued that maybe this meant the AI didn’t pose a threat, because tampering with sensors is only possible if you actually understand what you are doing. But even if that hope doesn’t work out, we could still try to: Formalize “knowledge” such that the AI systems we are worried about “know” that they are disempowering humanity. Design algorithms that detect sensor tampering that our AI “knows” about. Verify that our AI understands the situation well enough to “know” if it is tampering with sensors, either empirically or by making some additional argument about sensor tampering. As a vague illustration, we could try to argue that the AI systems we are worried about have a certain kind of mechanical understanding of the systems they are predicting. We could show that this kind of mechanical understanding is inconsistent with primality-test-like cases (which are possibly only because “is prime” lacks interesting internal structure and is more like a raw fact about the world), and then we could empirically monitor whether our AI systems actually have this kind of rich understanding. As another illustration, we could try to relate “knowledge” directly to the ability to answer certain kinds of questions, including honestly answering questions about what’s happening in the world, and then carefully monitor how hard it is for our AI to answer those questions so that we’d notice if it became hard. This is more of a vague catch-all than either approach A or B, but I think it deserves attention because the situation of an AI tampering without knowing it feels extremely weird. So there may be room to make sense of that situation and show how it can’t happen. Conclusion It may be possible for a behavior to arise via two very different mechanisms without there being any efficient way to distinguish those mechanisms. That would be a fatal problem for mechanistic anomaly detection, but it would also probably mean that any approach to AI alignment will need to exploit some additional structure about sensor tampering in order to avoid it. This may take the form of clarifying the intuition that problematic AI systems surely “know” they are tampering with sensors; it may also be a serious obstacle to solving alignment under a clearly-articulated set of assumptions rather than relying on messy empirical contingencies.
304650e0-11e3-48b4-888a-06ba8cfb3ec8
StampyAI/alignment-research-dataset/arxiv
Arxiv
Evaluating Agents without Rewards 1 Introduction --------------- Reinforcementlearning(RL)hasenabledagentstosolvecomplextasksdirectlyfromhigh-dimensionalimageinputs,suchaslocomotion(heess2017parkour),roboticmanipulation(akkaya2019rubiks),andgameplaying(mnih2015dqn, silver2017alphago).However,manyofthesesuccessesarebuiltuponrichsupervisionintheformofmanuallydefinedrewardfunctions.Unfortunately,designinginformativerewardfunctionsisoftenexpensive,time-consuming,andpronetohumanerror(krakovna2020specification).Furthermore,thesedifficultiesincreasewiththecomplexityofthetaskofinterest.IncontrasttomanyRLagents,naturalagentsfrequentlylearnwithoutexternallyprovidedtasks,throughintrinsicobjectives.Forexample,childrenexploretheworldbycrawlingaroundandplayingwithobjectstheyfind.Inspiredbythis,thefieldofintrinsicmotivation(schmidhuber1991curiousmodel, oudeyer2007curiosity)seeksmathematicalobjectivesforRLagentsthatdonotdependonaspecifictaskandcanbeapplicabletoanyunknownenvironment.Westudythreecommontypesofintrinsicmotivation: | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | Objective | | | | --- | | Reward | | Correlation | | | | | --- | | HumanSimilarity | | Correlation | | | TaskReward | 1.00 | 0.67 | | HumanSimilarity | 0.67 | 1.00 | | InputEntropy | 0.54 | 0.89 | | InformationGain | 0.49 | 0.79 | | Empowerment | 0.41 | 0.66 | Table 1: Pearsoncorrelationcoefficientsbetweeneachintrinsicobjectiveandtaskrewardorhumansimilarity.Thethreeintrinsicobjectivescorrelatemorestronglywithhumansimilaritythanwithtaskreward.ThissuggeststhattypicalRLtasksmaynotbeasufficientproxyforthebehaviorthatisseeninhumansplayingthesamegames. ![](https://media.arxiv-vanity.com/render-output/6598860/x1.png) Figure 1: Trainingagentstoevaluatedifferentintrinsicobjectivescanbeaslowandexpensiveprocess.Toaddressthisproblem,wecollectadiversedatasetofdifferentenvironmentsandbehaviorsonceandretrospectivelycomputeagentobjectivesfromit.Wethenanalyzethecorrelationsbetweenintrinsicobjectivesandsupervisedobjectives,suchastaskrewardandhumansimilarity.Thisspeedsuptheiterationtimebylettingusdrawconclusionsontherelationshipsbetweendifferentintrinsicobjectiveswithouthavingtotrainanewagentforeachofthem. | | | | | | --- | --- | --- | --- | | Breakout | Seaquest | Montezuma | Minecraft | | | | | | | | | | | | | | | | Figure 2: Tomakecomputingtheagentobjectivestractableandefficient,wepreprocesstheimagesinourcollecteddatasetsbydiscretizingthemintobuckets.SimilartoGo-Explore(ecoffet2019goexplore),weresizetheimagesto8×8pixelsanddiscretizeeachoftheresultingcellstooneof4values.Theexamplesshowthatthisprocedurepreservespositionsofobjectsinthegame,suchastheplayer,ball,fish,andskull.Weenumeratethediscretizedimagestorepresenteachuniqueframebyanintegerindextocomputediscreteprobabilitytensorsfortheenvironments. * Inputentropyencouragesencounteringraresensoryinputs,measuredbyalearneddensitymodel(schmidhuber1990diffmodel, bellemare2016pseudocount, pathak2017icm, burda2018rnd). * Informationgainrewardstheagentfordiscoveringtherulesofitsenvironment(lindley1956expectedinfo, houthooft2016vime, shyam2018max, sekar2020plan2explore). * Empowermentrewardstheagentformaximizingtheinfluenceithasoveritssensoryinputsorenvironment(klyubin2005empowerment, mohamed2015empowerment, karl2017empowerment). Despitetheempiricalsuccessofintrinsicmotivationforfacilitatingexploration(bellemare2016pseudocount, burda2018rnd),itremainsunclearwhichfamilyofintrinsicobjectivesisbestforagivenscenario,forexamplewhentaskrewardsaresparseorunavailable,orwhenthegoalistobehavesimilarlytohumanplayers.Designingintrinsicobjectivesthatresultinintelligentbehavioracrossdifferentenvironmentsisanimportantunsolvedproblem.Moreover,itisnotclearwhetherdifferentintrinsicobjectivesoffersimilarbenefitsinpracticeorareorthogonalandshouldbecombined.Progresstowardansweringthesequestionsishinderedbyslowiterationspeedsbecauseforeachnewintrinsicobjective,onetypicallyneedstodesignandtrainanewagentinoneormoreenvironments.Weproposethemethodologyofevaluatingandcomparingintrinsicobjectivesbycorrelationanalysisonafixeddataset,asshownin[Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ EvaluatingAgentswithoutRewards").Thisfreesusfromhavingtotrainaseparateagentforeveryobjectiveconsidered,andalleviatesthecomplexityassociatedwithoptimizingobjectivesonline.Toimplementthismethodology,wecollectadiversedatasetof26agentsinfourcomplexenvironmentstocomparetaskreward,similaritytohumanplayers,aswellasthreerepresentativeintrinsicobjectives.Thekeyfindingsoftheexperimentalstudyaresummarizedasfollows: * Inputentropyandinformationgaincorrelatemorestronglywithhumansimilaritythantaskrewarddoes.Theintrinsicobjectivesalsocorrelatestronglywithhumansimilarityacrossallstudiedenvironments,whilerewarddoesnot.Todevelopagentsthatbehavesimilarlytohumanplayers,intrinsicobjectivesmaythusbemorerelevantthantypicaltaskrewards. * Simpleimplementationsofinputentropy,informationgain,andempowermentbasedonimagediscretizationcorrelatestronglywithhumansimilarity.Thus,theycouldleadtoeffectiveexplorationwhenoptimizedonlineandcouldserveasevaluationmetricswhentaskrewardsanddemonstrationsareunavailable. * Inputentropyandinformationgaincorrelatestronglywitheachother,buttoalesserdegreewithempowerment.Thissuggeststhatoptimizingempowermenttogetherwitheitherofthetwootherobjectivescouldbebeneficialfordesigningexplorationmethods. 2 Background ------------- Tovalidatetheeffectivenessofourintrinsicobjectivesacrossawidespectrumofagentbehavior,weretrospectivelycomputedourobjectivesonthelifetimeexperienceofwell-knownRLagents.Thus,wefirstcollecteddatasetsofavarietyofagentbehavioronwhichtocomputeandcompareourobjectives. #### Environments WechosethreedifferentAtarienvironmentsprovidedbyArcadeLearningEnvironment(bellemare2013ale):Breakout,Seaquest,andMontezuma’sRevenge,andadditionallytheMinecraftTreechopenvironmentprovidedbyMineRL(guss2019minerldata).BreakoutandSeaquestarerelativelysimplereactiveenvironments,whileMontezumaisachallengingplatformerrequiringlong-termplanning.Treechopisa3Denvironmentinwhichtheagentreceivesrewardforbreakingandcollectingwoodblocks,buthasconsiderablefreedomtoexploretheworld.Asdetailedin[Appendix F](#A6 "Appendix F Environments ‣ EvaluatingAgentswithoutRewards"),wechosethesefourenvironmentsbecausetheyspanarangeofcomplexity,freedom,anddifficulty. #### Agents The7agentsrepresentedinourdatasetinclude3learningalgorithmsand2trivialagentsforcomparison.WeselectedRLagentsspanningtherangefromextrinsictaskrewardtointrinsicmotivationreward.Additionally,weincludedrandomandno-opagents,twooppositeextremesofnaivebehavior.Ourgoalwastorepresentawiderangeofbehaviors:playingtoachieveahighscore,playingtoexploretheenvironment,andtakingactionswithoutregardtotheenvironment.Specifically,weusedthePPOagent(schulman2017ppo)trainedtooptimizetaskreward,andtheRND(burda2018rnd)andICM(pathak2017icm)explorationagentsthatusePPOforpolicyoptimization.Wetraintheexplorationagentsonceusingonlytheintrinsicrewardandonceusingbothintrinsicandtaskrewards.Theagentsaredescribedfurtherin[Appendix G](#A7 "Appendix G Agents ‣ EvaluatingAgentswithoutRewards"). 3 Method --------- Tospurprogresstowardbetterunderstandingofintrinsicobjectives,weempiricallycomparedthethreeobjectivefamiliesintermsoftheircorrelationwithhumanbehaviorandwiththetaskrewardsofthreeAtarigamesandMinecraft.First,wetrainedseveralwell-knownRLagentsonthreeAtarigamesandMinecraftandstoretheirlifetimedatasetsofexperience.Specifically,100millionframesoneachofthethreeAtarienvironmentswitheachofsevenagents:random,no-op,PPO,andRNDandICMwithandwithouttaskreward.Minecraftwasevaluatedfor12millionframesperagentbecausethesimulationisslowerthantheAtarigames,andfiveagentsratherthansevenwereused,excludingbothconfigurationsofICM.Thisresultedinatotalof2.1billiontimestepsandabout9terabytesofagentexperience.Wepreprocessedthecollectedexperiencedatasetsandcomputedhumansimilarity,inputentropy,empowerment,andinformationgainusingsimpleestimatorswithclearlystatedassumptions,inaggregateovereachagentlifetime,yieldingonenumberperobjective-agent-environment.Atableofallcomputedvaluesisincludedin[Table A.1](#A1.T1 "Table A.1 ‣ Appendix A Objectives ‣ EvaluatingAgentswithoutRewards").Wethenanalyzedthecorrelationsbetweentheintrinsicobjectivestounderstandhowtheyrelatetoanotherandhowwelltheyreflecttaskrewardandhumansimilarity.Wenowdescribethepreprocessingandintroduceourestimatorsfortheobjectives. ### 3.1 Preprocessing Tomakethecomputationoftheconsideredobjectivestractableandefficient,wediscretizetheagent’sinputimagessothattheycanberepresentedbybucketindices.Thisallowsustosummarizeeachcollecteddatasetasasparsetensorthatholdsthecountsofeachpossibletransitiontuple,fromwhichwethencomputetheobjectivevalues. #### Discretization Asshownin[Figure 2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ EvaluatingAgentswithoutRewards"),wepreprocesstheimagesbyfirstconvertingthemfromRGBtograyscaleastheywereseenbytheagents.Afterthat,webilinearlyresizethemto8×8pixels.Wediscretizetheselow-resolutionimagestofourpossiblevaluesperpixel,withthresholdschosenasthebrightnesspercentiles25,50,and75acrossalluniquevaluesofthecorrespondingpixelintheenvironmentacrossallagents.Wealsoconsideredchoosingthethresholdsbasedoneachagentindividually,asdiscussedin[Appendix D](#A4 "Appendix D InformationGainVariants ‣ EvaluatingAgentswithoutRewards").Theuniquediscretizedimagesareenumeratedtorepresenteachimagebyanintegerindex. #### Aggregation Foreachpairingofagentandenvironment,wesummarizethetransitionspreprocessedimagesandactionsintoatensorofcounts.Foranimageindex1≤i≤|X|,andactionindex1≤j≤|A|,andasuccessorimageindex1≤k≤|X|,whereXisthesetofinputsandAthesetofactions,thecounttensorNijkisdefinedasthenumberoftransitionsfromimagebucketiandactionjtoimagebucketk.NormalizingthecounttensorNyieldsaprobabilitytensorPthatstorestheprobabilityofeachtransitionintheagent’sdataset.UndertheassumptionofaMarkovianenvironmentandagent,theprobabilitytensorfullydescribesthestatisticsofthepreprocesseddataset, | | | | | | --- | --- | --- | --- | | | P≐N/∑ijkNijk. | | (1) | TheprobabilitytensorPdescribesthejointprobabilityoftransitionsforeachagentandenvironmentandthusallowsustocomputeanymarginalsandconditionalsneededforcomputingtheobjectives. ### 3.2 Objectives Wecomparetwosupervisedobjectives,taskrewardandhumansimilarity,aswellasthreeintrinsicobjectives:inputentropy,informationgain,andempowerment.Wecomputeasinglevalueofeachoftheseobjectivesoneachagent-environmentdataset. #### Taskreward TherewardprovidedbyRLenvironmentsmeasuressuccessataspecifictask.Theenvironmentsweusehaveonlyonepredefinedtaskeach,despitethewiderangeofconceivableobjectives,especiallyinMontezuma’sRevengeandMinecraft.ThisistrueofmanyRLenvironments,andlimitsone’sabilitytoanalyzethebehaviorofanagentinageneralsense.Multi-taskbenchmarksaddressthisproblembutoftenincludeadistinctenvironmentforeachtaskratherthanmultipletaskswithinthesameenvironment(yu2019metaworld).Thiswouldmakeitdifficulttoevaluatetheagent’sabilitytogloballyexploreitsenvironmentindependentofthetask. #### Humansimilarity Taskrewardcapturesonlytheagent’ssuccessatthespecifictaskdefinedviatherewardfunction.Thismaynotmatchupwiththebehaviorofahumanplayerwhointeractswithanenvironment.Tocapturethishuman-likeaspectofbehavior,wecomputethesimilaritybetweentheagent’sbehaviorandhumanbehaviorinthesameenvironment,thatis,usinghumanbehaviorasa“groundtruth.”InspiredbytheinverseRLliterature(ziebart2008maximum, klein2012inverse),wemeasuretheoverlapbetweenhumanandagentobservations.WeleveragetheAtari-HEAD(zhang2019atarihead)andtheMineRLTreechop(guss2019minerldata)datasetsandpreprocessthemthesamewayastheagentdatasets.TheAtarihumandatasetscontain∼250Kframesperenvironment,and∼460KframesareavailableforMinecraft.WecomputehumansimilarityastheJaccardindex,alsoknownasintersectionoverunion,betweentheuniqueinputimagesencounteredinthehumandatasetandthoseencounteredbytheartificialagent.Forthis,wefirstcomputethemarginalinputprobabilitiesfromtheprobabilitytensorsP\,agentandP\,humanoftheartificialagentandthehumanplayer,respectively.Thehumansimilarityisthencomputedasafractionofnon-zeroprobabilityentries,where1(⋅)denotestheindicatorfunctionthatevaluatesto1fortrueinputsandto0forfalseinputs, | | | | | | --- | --- | --- | --- | | | S≐∑i1(Ai>0∧Bi>0)/∑i1(Ai>0∨Bi>0)Ai≐∑jkP\,agentijk,Bi≐∑jkP\,humanijk. | | (2) | Notethatwhileweuseinputimagesfromrecordedhumanbehaviortocomputehumansimilarity,wedonotcomparethehumanandagentactionsdirectly,astheRLagentsplayinanenvironmentwithstickyactions,whilethehumansdidnot.Collectinghumandatasetsischallenginginenvironmentsthatarechallengingforhumanplayers,suchashigh-dimensionalcontinuouscontrol.Thus,weconsiderthreeintrinsicobjectives,whichdonotrequireenvironment-specificengineeringorhumandemonstrators. | | | | --- | --- | | | | Figure 3: Objectivevaluesforagentsthatusetaskreward(top)andagentswithoutaccesstotaskreward(bottom).Thesupervisedobjectivesaretaskreward(R)andhumansimilarity(S)andtheintrinsicobjectivesareinputentropy(C),empowerment(E),andinformationgain(I).Theno-opagentachievesthelowestscoresinallobjectivesdoesnotshowupinthenormalizedcoordinates.ThetwoexplorationagentswithaccesstotaskrewardsachievethehighesttaskrewardandhumansimilarityacrossAtarienvironments,andRNDwithoutrewardinMinecraft.ICMorRNDeachachievethehighestinputentropyandinformationgainvalueinthreeoutoffourenvironmentsaccordingtoourobjectives.Surprisingly,PPOandtask-agnosticICMachievehighempowerment,eveninMontezumawheretheyachievelowtaskreward. #### Inputentropy Theagent’sinputentropyintheenvironmentmeasureshowspreadoutitsvisitationdistributionoverinputsis.Theinputentropytendstobelargerthemoreinputstheagenthasreached,andthemoreuniformlyitvisitsthem.Inotherwords,inputentropymeasureshowimprobableindividualinputsareundertheinputdistributionoftheagent.ThisideahasbeenusedforexplorationinRL,wherethenegativelog-probabilityofinputsunderalearneddensitymodelisusedasexplorationbonus(schmidhuber1991curiousmodel, oudeyer2007curiosity, bellemare2016cts, burda2018rnd).Becausewecomputetheinputentropyretrospectively,wehaveaccesstotheagent’sinputdistributionviatheprobabilitytensorthatsummarizestheagent’slifetimeexperience.Wecomputethetheentropyoverinputsxbymarginalizingoutactionsandsuccessinputs, | | | | | | --- | --- | --- | --- | | | C≐H[x]=−∑iAilnAi,Ai≐∑jkPijk. | | (3) | #### Empowerment Empowermentmeasurestheagent’sinfluenceoveritssensoryinputsandthustheenvironment(klyubin2005empowerment).IthasbeenappliedtoRLbymohamed2015empowerment, salge2013continuousempow, karl2017empowerment, leibfried2019empowreward, zhao2020empow.Multiplevariantsexistintheliterature,includingpotentialempowermentwheretheagentaimsto“havemanyoptions”andrealizedempowermentwheretheagentaimsto“usemanyoptions.”Weconsiderthelattervariant,thatismeasuredasmeasuredasthemutualinformationbetweentheagentsactionsandresultinginputs,givencurrentinputs(salge2014realizedempow, hafner2020apd).Wecomputetheone-stepempowermentasthedifferencebetweentheentropyofactionagiventheprecedinginputx,beforeandafterobservingthefollowinginputx′, | | | | | | --- | --- | --- | --- | | | | | (4) | #### Informationgain Informationgainmeasureshowmanybitsofinformationtheagenthaslearnedabouttheenvironmentfromitsdatasetofexperience(lindley1956expectedinfo).Itisthemutualinformationbetweenobservationsandtheagent’srepresentationoftheenvironment.InformationgainhasledtosuccessfulexplorationinRL(sun2011plansurprise, houthooft2016vime, shyam2018max, mirchev2018dvbflm, sekar2020plan2explore).Tomeasuretheamountofinformationgained,weneedawaytochoosearepresentationoftheenvironmentthatsummarizestheagent’sknowledge.Preprocessingtheinputsintodiscreteclassesenablesustorepresenttheknowledgeasabeliefoverthetruetransitionmatrix.Thetotalinformationgainofoveragent’slifetimeistheentropydifferenceofthepriorbeliefoverthetransitionmatrixandtheposteriorbelief.WechoosethebeliefstobeDirichletdistributionsoversuccessorinputsx′foreverypairingofcurrentinputsxandactionsa(sun2011plansurprise, friston2017actinfparaminfogain), | | | | | | --- | --- | --- | --- | | | I≐I[z;d]=H[z]−H[z∣∣d]=∑ijH[Dir(1)]−∑ijH[Dir(1+Nij)]. | | (5) | ![](https://media.arxiv-vanity.com/render-output/6598860/x4.png) Figure 4: Pearsoncorrelationcoefficientsbetweenthethreeconsideredintrinsicobjectivesandtaskreward(left)andsimilaritytohumanplayers(right).AllconsideredintrinsicobjectivescorrelatemorestronglywithhumansimilaritythanwiththetaskrewardsintheRLenvironments.Thissuggeststhattypicaltaskrewardsmaybeinsufficientforevaluatingexplorationobjectiveswhenthegoalistoproducebehaviorsimilartohumanplayers.Interestingly,rankingtheintrinsicobjectivesbytheircorrelationswitheithertaskrewardorhumansimilaritygivesthesameordering.ThecorrelationswerecomputedfromN=26agentenvironmentpairingsandarestatisticallysignificantwithp<0.05. TheDirichletdistributionisparameterizedbyavectorofcounts,knownasconcentrationparameterα.ThereisoneDirichletdistributionforeachpairingofcurrentinputxandactiona,withaconcentrationparameterofthelengthofpossiblesuccessorinputsx′.WechooseauniformDirichletdistributionaspriorbelief,correspondingtoavectorofonesforitsconcentrationparameter.TheposteriorbeliefistypicallyaDirichletbeliefthatusesasconcentrationparameterthepriorvalueof1plusthecountvectorNij.TheentropyofaDirichletdistributioniscomputedfromthevectorofconcentrationparametersαusingthedigammafunctionψ(⋅)theincompletebetafunctionB(⋅)(lin2016dirichlet), | | | | | | --- | --- | --- | --- | | | H[Dir(α)]=lnB(α)−∑k(αk−1)ψ(αk)+(∑kαk−|X|)ψ(∑kαk). | | (6) | Inpractice,wefinditmoreeffectivetoonlyconsideruniquetransitions,andthusweuse1+sign(Nij)astheposteriorconcentrationinsteadofthepriorplusrawcounts1+Nij.Weconjecturethatchoiceleadstoamoremeaningfulinformationgainobjectivebecausethefourenvironmentsareclosetodeterministic,withover80%ofinputactionpairsleadingtoauniquesuccessorinputinthedatasets.Thus,experiencingatransitiononceestablishesthatthetransitionispossible,andrevisitingitmultipletimesshouldnotfurtherincreasetheagent’sknowledge.Wecomparealternativestothischoicein[Appendix D](#A4 "Appendix D InformationGainVariants ‣ EvaluatingAgentswithoutRewards"). 4 Analysis ----------- Weconductacorrelationanalysistounderstandhowthethreeintrinsicobjectivesrelatetoanotherandtothesupervisedobjectives,taskrewardandhumansimilarity.First,wecomputeallobjectivesfortheagentsandcomparetheagentsbythem.Second,weranktheintrinsicobjectivesbytheircorrelationstotaskrewardandhumansimilarity.Third,wecomparethecorrelationsamongtheintrinsicobjectives.Wemakethesourcecodeforreplicatingouranalysesandthecollecteddatasetsavailabletosupportfutureworkonevaluatingagentswithouttaskrewards.111<https://danijar.com/agenteval> ### 4.1 EvaluationofAgents Webeginbycomputingandanalyzingtheobjectivevaluesforindividualagents,withthegoalofintuitivelyunderstandingwhatspecificagentbehaviorleadstohighandlowvaluesofeachobjective.RNDandICMhavethehighestvaluesoftaskrewardandhumansimilarityinallenvironments;thattheagentwithhighestinputentropyandisalsotheagentwithhighestinformationgaininallenvironments;andthatempowermentvarieswidelybyenvironment.Theobjectivevaluesarevisualizedin[Figure 3](#S3.F3 "Figure 3 ‣ Humansimilarity ‣ 3.2 Objectives ‣ 3 Method ‣ EvaluatingAgentswithoutRewards")andtheirnumbersaregivenin[Appendix A](#A1 "Appendix A Objectives ‣ EvaluatingAgentswithoutRewards"). #### Taskreward Thetotalepisoderewardsofthetask-specificRNDandICMagentswetrainedaresimilartothosereportedbytaiga2020bonus.ICMachieveslightlyhigherrewardthanRNDinSeaquest,whiletheoppositeistrueintaiga2020bonus.Weobservein[Appendix A](#A1 "Appendix A Objectives ‣ EvaluatingAgentswithoutRewards")thattask-specificRNDandICMachievethehighesttaskrewardpertimestepintheenvironmentsexceptinMinecraft,wheretask-agnosticRNDperformsbest,showingthatexplicitexplorationobjectivesarebeneficialinallourconsideredenvironments. #### Humansimilarity Humansimilarityisthehighestfortask-specificICMandRNDinSeaquestandMontezumarespectively,butfortask-agnosticICMinBreakoutandtask-agnosticRNDinMinecraft.Explorationagentsachievethehighesthumansimilarityinallfourenvironments.InenvironmentsotherthanMinecraft,therandomagenthassubstantiallylowerhumansimilaritythantheotheragents,andno-opisconsistentlythelowestacrossallenvironments. ![](https://media.arxiv-vanity.com/render-output/6598860/x5.png) Figure 5: Correlationmatricesfortheobjectivesinourstudy:taskreward(R),humansimilarity(S),inputentropy(C),empowerment(E),informationgain(I).Thecorrelationistakenacrossagents.Allobjectivescorrelatepositivelybuttodifferentdegrees.Whilehumansimilarityandthethreeintrinsicobjectivesallcorrelatestronglywithanother,taskrewardonlycorrelatesweaklywiththemonSeaquestandMontezuma.Asper-environmentcorrelationsarecomputedfrom7agentsinAtarienvironmentsand5inMinecraft,theyarenotstatisticallysignificantandprovidedonlyforexploratoryanalysis.Thenumericalvaluesforthecorrelationmatricesareincludedin[Table B.1](#A2.T1 "Table B.1 ‣ Appendix B Correlations ‣ EvaluatingAgentswithoutRewards"). #### Inputentropy Task-agnosticICMandRNDobtainthehighestinputentropyinallenvironments,exceptinMinecraftwheretherandomagentachievesthehighestinputentropy.Thissuggeststhatusingtaskreward“distracts”thetask-specificagentsfrommaximizinginputentropy.Theno-opagenthasthelowestinputentropyinallenvironments.TherandomagentachieveshighinputentropyinMinecraft,wheremanydistinctinputsareeasytoreachfromtheinitialstate,e.g.bymovingthecamera. #### Informationgain InformationgainishighestinBreakoutforICMwithreward,inSeaquestandMontezumaforRNDwithoutreward,andinMinecraftfortherandomagent.WeconjecturethattherandomagentwouldachievelowerinformationgaininMinecraftunderapreprocessingschemethatgroupssemanticallysimilarobservationsintothesamebucket.Inallfourenvironments,theagentachievingthehighestinputentropyalsoachievesthehighestinformationgain,implyingthatinputentropyandinformationgainarecloselyrelatedinpractice. #### Empowerment Theagentsthathavethemostempowermentvarymoreacrossenvironmentsthantheydofortheotherobjectives.InSeaquest,wheremostmeaningfulactionsaretiedtoreward,thePPOagentachieveshighestempowerment.InMontezuma,wheremanyactionsinfluencetheinputinwaysunrelatedtothetask,task-agnosticICMachievesthehighestempowerment.InBreakout,wherealmostallactionsmovethepaddleandthusinfluencetheinput,therandomagentachievesthehighestempowerment. ### 4.2 EvaluationofIntrinsicObjectives Weevaluatethethreeintrinsicobjectivesbasedontheircorrelationswithtaskrewardandhumansimilarity.Thecorrelationsaggregatedacrossallenvironmentsareshownin[Figure 4](#S3.F4 "Figure 4 ‣ Informationgain ‣ 3.2 Objectives ‣ 3 Method ‣ EvaluatingAgentswithoutRewards")andtheirnumericvaluesareincludedin[Table B.1](#A2.T1 "Table B.1 ‣ Appendix B Correlations ‣ EvaluatingAgentswithoutRewards").Thisconstitutesourmainresult.Allintrinsicobjectivescorrelatemorestronglywithhumansimilaritythanwithtaskreward.Moreover,rankingthembytheircorrelationswitheithertaskrewardorhumansimilarityyieldsthesameorder.Finally,theintrinsicobjectivescorrelatepositivelywithtaskrewardandhumansimilaritywithinindividualenvironmentsinalmostallcases. #### Bytaskrewardandhumansimilarity Allintrinsicobjectivescorrelatesubstantiallymorewithhumansimilaritythanwithtaskreward.Thecorrelationswithhumansimilarityare:inputentropy(R=0.89),informationgain(R=0.79),andempowerment(R=0.66).Thecorrelationswithtaskrewardare:inputentropy(R=0.54),informationgain(R=0.49),andempowerment(R=0.41).Thissuggeststhatintrinsicobjectivesmaycapturemoreaspectsofhumanbehaviorthantypicaltaskrewards,andmaythusbethemorepromisingapproachwhenthegoalistodesignagentsthatbehavesimilarlytohumans. #### Rankingofintrinsicobjectives Rankingtheintrinsicobjectivesbytheircorrelationswitheithertaskrewardorhumansimilarityyieldsinthesameorder.Inputentropycorrelatesmoststronglywithtaskrewardandhumansimilarity,followedbyinformationgain,andthenempowerment.Thissuggeststhatexplorationobjectivessuchasinputentropyandinformationgainarebeneficialacrossthefourenvironments,whereastaskreward,whichisnotdirectlyencouragingexploration,doesnotseemasimportant.Moreover,ourresultsidentifyinputentropyasapromisingintrinsicobjectivethatshouldworkwellacrossmanyscenarios. #### Per-environmentcorrelations Weadditionallyincludeper-environmentcorrelationsin[Figure 5](#S4.F5 "Figure 5 ‣ Humansimilarity ‣ 4.1 EvaluationofAgents ‣ 4 Analysis ‣ EvaluatingAgentswithoutRewards").Notonlyaggregatedacrossenvironments,butalsowithineachenvironment,theintrinsicobjectivescorrelatepositivelywithbothtaskrewardandhumansimilarityin23outof24cases.Thecorrelationsbetweenintrinsicobjectivesandhumansimilarityarestronginallfourenvironments(0.56≤R≤0.96),exceptforempowermentinSeaquest(R=0.44).Incontrast,thecorrelationswithtaskrewardvarymoreacrossenvironments,beinglargerinBreakout(0.48≤R≤0.91)andMinecraft(0.59≤R≤0.98)butweakerinSeaquest(0.22≤R≤0.61)andMontezuma(0.00≤R≤0.19). ### 4.3 ComparisonamongObjectives Theright-mostcorrelationmatrixin[Figure 5](#S4.F5 "Figure 5 ‣ Humansimilarity ‣ 4.1 EvaluationofAgents ‣ 4 Analysis ‣ EvaluatingAgentswithoutRewards")showsthecorrelationsamongallfiveobjectives,whicharestatisticallysignificantwithp<0.05.Thenumericalvaluesareincludedin[Table B.1](#A2.T1 "Table B.1 ‣ Appendix B Correlations ‣ EvaluatingAgentswithoutRewards").Taskrewardandhumansimilaritycorrelatepositively(R=0.67)butthiscorrelationisweakerthanthosebetweenhumansimilarityandinputentropy(R=0.89)orinformationgain(R=0.79).Inputentropyandinformationgaincorrelatestrongly(R=0.95),suggestingthattheywouldhavesimilareffectswhenoptimizedonline.Incontrast,empowermentcorrelateslessstronglywiththetwoobjectives(R=0.66andR=0.55).Thissuggeststhatempowermentmeasuresadifferentcomponentofagentbehaviorandcombiningitwitheitherinputentropyorinformationgaincouldbebeneficialwhendesigningnovelexplorationmethods. 5 Discussion ------------- Inthispaper,wehavecollectedlargeanddiversedatasetsofagentbehavior,computedthreeintrinsicobjectivesoneachdataset,andanalyzedthecorrelationsoftheintrinsicobjectiveswitheachotherandwithtaskrewardandahumansimilarityobjective.Ourretrospectiveevaluationmethodologyenabledustocomparedifferentintrinsicobjectiveswhileavoidingthecomplexityandtypicallyslowiterationspeedthatcomewithonlinetraining. #### Keyfindings Allstudiedintrinsicobjectivescorrelatemorestronglywithhumansimilaritythanthetaskrewardsdo.Thesecorrelationsholdconsistentlyacrossallenvironments,whereastaskrewardandhumansimilaritycorrelatestronglyonlyinhalfoftheenvironments.Wethusrecommendintrinsicobjectivesovertaskrewardswhenthegoalistodesigngeneralagentsthatbehavesimilarlytohumanplayers.Furthermore,wefindthatinputentropyandinformationgainaresimilarobjectiveswhileempowermentmayoffercomplementarybenefits,andthusrecommendfutureworkoncombiningintrinsicobjectives. #### Futurework Toassigntheagentobservationstobuckets,wenaivelydownscaledthem,whichissimplebutdoesnotaccountforthesemanticsimilaritybetweenimages.Wesuggestlearningtherepresentationsusingdeepneuralnetworksasadirectionforfuturework.Alimitationofthehumansimilarityvaluesisthatthehumandatasetsarerelativelysmallanditisunclearwhatinstructionsthehumanplayersreceived,whichcouldaffecthowmuchtheplayersfocusonapparenttaskscomparedtoopen-endedbehavior.Accesstomorehumandataandcontrolovertheinstructionsgiventotheplayerswouldbehelpfulforfuturework.
29b0dc82-e920-44b6-8de6-527e42032ccf
trentmkelly/LessWrong-43k
LessWrong
Paid research assistant position focusing on artificial intelligence and existential risk Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence are a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If interested in the position, please email allan.dafoe@yale.edu with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.
01549de7-e94e-4ac6-a0c8-3ddd9c0fd89a
trentmkelly/LessWrong-43k
LessWrong
Social Science in its epistemological context Seeds of Science has kindly published my essay “Prediction and control in natural and social systems”. This short essay : i) address and contextualize the modest predictive success of Social Science and ii) describes what kind of scientific intervention on the human society can be successful.  Its main arguments are the following: Exact Sciences and Artificial Systems: Physics and Chemistry are considered exact because they operate on artificial systems developed through scientific processes, unlike complex natural systems. The precision seen in Celestial Mechanics (the only natural system on which we can consider Science to be exact) is replicated in other physical systems like lenses, batteries, and microchips, but they are human-made. We mostly have exact predictive science on systems designed to be predicted. Even chemists' first task was to create pure substances, which are essential for applying the rules of stoichiometry and chemical thermodynamics. Complex Natural Systems: Beyond exact sciences, other natural sciences study complex systems, both animate and inanimate. Complex systems can be categorized into modular systems, which allow controlled experiments (vg. Medicine random trials), and non-modular systems, where direct experimentation is not feasible (the system is too big, no easy control points are found).  Non-Modular Complex Systems: These systems, such as the atmosphere, ecosystems, and populations, are studied by Climatology, Ecology, Population Genetics, and Evolutionary Biology, and are not easily amenable to controlled experiments on due to their scale and complexity (that imply that it is difficult to alter a single element alone in the system). Social Sciences: When compared with natural sciences studying complex non-modular systems, social sciences' predictive success and techniques align with those of similar natural sciences, challenging the notion that social science is less rigorous. Institutional Design and Progress: increasing t
5f6ef737-53cd-4c83-9380-dd2aaa334b79
trentmkelly/LessWrong-43k
LessWrong
Heading off a near-term AGI arms race I know people have talked about this in the past, but now seems like an important time for some practical brainstorming here. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope, over the next ten years. Let's estimate a third of that goes to paying for man-years of actual, low-level, basic AGI capabilities research. That's about 1500 man-years. Anything which can show something resembling progress can easily secure another few hundred man-years to continue making progress. Now, if this scenario comes to pass, it seems like one of the worst-case scenarios -- if AGI is possible today, that's a lot of highly incentivized, funded research to make it happen, without strong safety incentives. It seems to depend on VCs realizing the high potential impact of an AGI project, and of the companies having access to good researchers. The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety: > ...I think this exactly the sort of innovation timeline real venture capitalists should be considering - funding real R&D that could have a revolutionary impact even if the odds are against it. > The company to get all of this right will be the first two trillion dollar company. Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem? I'll admit to being very scared.
6650f168-f21c-4063-99e9-d4eb5f1ecbd7
trentmkelly/LessWrong-43k
LessWrong
Informed consent bias in RCTs? The problem of published research findings not being reliable has been discussed here before.  One problem with RCTs that has received little attention is that, due to informed consent laws and ethical considerations, subjects are aware that they might be receiving sham therapy.  This differs from the environment outside of the research setting, where people are confident that whatever their doctor prescribes is what they will get from their pharmacist.  I can imagine many ways in which subjects' uncertainty about treatment assignment could affect outcomes (adherence is one possible mechanism).  I wrote a short paper about this, focusing out what we would ideally estimate if we could lie to subjects, versus what we actually can estimate in RCTs (link).   Here is the abstract: > It is widely recognized that traditional randomized controlled trials (RCTs) have limited generalizability due to the numerous ways in which conditions of RCTs differ from those experienced each day by patients and physicians. As a result, there has been a recent push towards pragmatic trials that better mimic real-world conditions. One way in which RCTs differ from normal everyday experience is that all patients in the trial have uncertainty about what treatment they were assigned. Outside of the RCT setting, if a patient is prescribed a drug then there is no reason for them to wonder if it is a placebo. Uncertainty about treatment assignment could affect both treatment and placebo response. We use a potential outcomes approach to define relevant causal effects based on combinations of treatment assignment and belief about treatment assignment. We show that traditional RCTs are designed to estimate a quantity that is typically not of primary interest. We propose a new study design that has the potential to provide information about a wider range of interesting causal effects Any thoughts on this?  Is this a trivial technical issue or something worth addressing?
9e60d06c-c491-40a6-a242-f7c8e3cd82f7
trentmkelly/LessWrong-43k
LessWrong
Dress Up For Secular Solstice I argue below that 1) fashion norms are good, even purely within the rationalist community and 2) Secular Solstice should have an unofficial attire norm: a slightly dressy black base layer with shiny accents (shiny jewelry, reflective material, metallic colors, etc; see examples in the main text).  Others have written general defenses of “dressing up” in day-to-day life. These defenses typically make the very safe argument that “how you dress conveys information about you to other people, and you should take that into account”. Fundamentally, these are arguments for rationalists to dress based on the perceptions of those outside of the rationalist community. I think it is far more interesting (and contentious) to consider how the rationalist community should approach dress internally. In settings where rationalists set the norms, what should those norms be?  One tempting response is that rationalists should not care about fashion. Why devote mental energy to your own presentation when it could be better spent on other things? And, with that in mind, why should you evaluate other’s presentations of themselves when you could get to know them or their ideas, and evaluate them on that basis alone? Perhaps the Bayesian framing of this is that clothing is either totally uninformative (at least conditioning on rationalism) or a signal of information you would rather not use (maybe that’s wealth, maybe that’s country of origin, whatever).[1] I think these positions are hard to defend. Absent any internal rationalist coordination, clothing is already informative. Many people wear shirts that explicitly tell you things about them, like where they went to college, where they work, what bands they like, what they think is funny, etc. And maybe you don’t want to use all of the available information, but why wouldn’t you want to use this information when deciding who to talk to, who to avoid, who to crack jokes with, who to be professional around, etc? So, I think even in rati
67d30aa3-d512-463a-9f45-145208bad2ca
trentmkelly/LessWrong-43k
LessWrong
What wiki-editing features would make you use the LessWrong wiki more? The LessWrong wiki does not seem as well used as it should be. I guess this is a lack of editors. @Vladimir_Nesov made a good point that many standard wiki-editing features are missing which makes the prospect unappealing. > The point is that absence of the feature makes engagement with the wiki less promising, as it becomes inconvenient and hence infeasible in practice to protect it in detail, and so less appealing to invest effort in it. I mentioned that as a hypothesis for explaining currently near-absent editor engagement So in the spirit of this, what features would cause you personally to edit a lot more than you currently do?  I may try and pay some devs to write the pull requests if they seem feasible.
24270a68-d238-4b9c-b0d4-8de5f88fde87
trentmkelly/LessWrong-43k
LessWrong
Immortality or death by AGI AKA My Most Likely Reason to Die Young is AI X-Risk TL;DR: I made a model which takes into account AI timelines, the probability of AI going wrong, and probabilities of dying from other causes. I got that the main “end states” for my life are either dying from AGI due to a lack of AI safety (at 35%), or surviving AGI and living to see aging solved (at 43%). Meta: I'm posting this under a pseudonym because many people I trust had a strong intuition that I shouldn't post under my real name, and I didn't feel like investing the energy to resolve the disagreement. I'd rather people didn't de-anonymize me. The model & results A simple model of AGI scenarios. All possible "end" states of the simulation are in color.  I made a simple probabilistic model of the future, which takes seriously the possibility of AGI being invented soon, its risks, and its effects on technological development (particularly in medicine): * Without AGI, people keep dying at historical rates (following US actuarial tables) * At some point, AGI is invented (following Metaculus timelines) * At the point AGI is invented, there are two scenarios (following my estimates of humanity’s odds of survival given AGI at any point in time, which are relatively pessimistic): 1. We survive AGI. 2. We don’t survive AGI. * If we survive AGI, there are two scenarios: 1. We never solve aging (maybe because aging is fundamentally unsolvable or we decide not to solve it). 2. AGI is used to solve aging. * If AGI is eventually used to solve aging, people keep dying at historical rates until that point. * I model the time between AGI and aging being solved as an exponential distribution with a mean time of 5 years. Using this model, I ran Monte Carlo simulations to predict the probability of the main end states of my life (as someone born in 2001 who lives in the US): * I die before AGI: 10% * I die from AGI: 35% * I survive AGI but die because we never solve aging: 11% * I survive AGI but
0e4f5ec0-9856-488a-add1-b3140d94abeb
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Book review: Human Compatible Book review: Human Compatible, by Stuart Russell. Human Compatible provides an analysis of the long-term risks from artificial intelligence, by someone with a good deal more of the relevant prestige than any prior author on this subject. What should I make of Russell? I skimmed his best-known book, [Artificial Intelligence: A Modern Approach](https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach), and got the impression that it taught a bunch of ideas that were popular among academics, but which weren't the focus of the people who were getting interesting AI results. So I guessed that people would be better off reading Deep Learning by Goodfellow, Bengio, and Courville instead. Human Compatible neither confirms nor dispels the impression that Russell is a bit too academic. However, I now see that he was one of the pioneers of [inverse reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning#Inverse_reinforcement_learning), which looks like a fairly significant advance that will likely become important someday (if it hasn't already). So I'm inclined to treat him as a moderately good authority on AI. The first half of the book is a somewhat historical view of AI, intended for readers who don't know much about AI. It's ok. ### Key proposals Russell focuses a moderate amount on criticizing what he calls the **standard model** of AI, in which someone creates an intelligent agent, and then feeds it a goal or utility function. I'm not too clear how standard that model is. It's not like there's a consensus of experts who are promoting it as the primary way to think of AI. It's more like people find the model to be a simple way to think about goals when they're being fairly abstract. Few people seem to be defending the standard model against Russell's criticism (and it's unclear whether Russell is claiming they are doing so). Most of the disagreements in this area are more about what questions we should be asking, rather than on how to answer the questions that Russell asks. Russell gives a fairly cautious overview of why AI might create risks that are as serious as the risks gorillas face from humans. Then he outlines an approach that might avoid those risks, using these three rules: > > 1. The machine's only objective is to maximize the realization of > human preferences. > 2. The machine is initially uncertain about what those > preferences are. > 3. The ultimate source of information about human preferences is > human behavior. > > > Note that these are high-level guidelines for researchers; he's not at all claiming they're rules that are ready to be written into an AI. Russell complains that the AI community has ignored the possibility of creating AIs that are uncertain about their objective, and calls that "a huge blind spot". I'm unclear on whether this qualifies as a blind spot. I can imagine a future in which it's important. But for AI as it exists today, it looks like uncertainty would add complexity, without producing any clear benefit. So I think it has been appropriate for most AI researchers to have postponed analyzing it so far. An aside: Russell points out that uncertainty provides an interesting way to avoid wireheading: if the reward is defined so that it can't be observed directly, then the AI will know that hacking the AI's signal won't create more brownie points in heaven. ### Feasibility? Russell is fairly convincing in his claim that AIs which are designed according to his rules will relatively safe. That's a much better achievement than most authors manage on this topic. I'm a bit less convinced that this approach is easy enough to implement that it will be competitive with other, possibly less safe, approaches. Some of my doubt derives from the difficulty, using current techniques, of encoding the relevant kind of abstract objectives into an AI. The objectives that Russell wants don't look much like the kind of objectives that AI researchers know how to put into an AI. It's fairly well known how to give an AI objectives either by using a large number of concrete examples of the "correct" result, or by specifying a readily quantifiable reward. Even a dilettante such as myself knows the basics of how to go about either of those approaches. In contrast, it's unclear how to encode an objective that depends on high-level concepts such as "human" or "preference that is inferred from behavior" without the AI already having done large amounts of learning. Maybe there's some way to use predictions about observed preferences as if the predictions quantified the actual objective? That looks partly right. But how do we tell the AI that the predictions aren't the real objective? If we don't succeed at that, we risk something like the King Midas problem: a naive new AI might predict that King Midas's preferences will be better satisfied if everything he touches turns to gold. But if that prediction becomes the AI's objective, then the AI will resist learning that the King regrets his new ability, since that might interfere with it's objective of turning anything he touches into gold. AI researchers have likely not yet tried to teach their systems about hard-to-observe concepts such as utopia, or heaven. Teaching an AI to value not-yet-observed preferences seems hard in roughly the same way. It seems to require using a much more sophisticated language than is currently used to encode objectives. I'll guess that someone would need to hard code many guesses about what human preferences are, to have somewhere to start, otherwise it's unclear how the AI would initially prefer any action over another. How is it possible to do that without the system already having learned a lot about the world? And how is it possible for the AI to start learning without already having some sort of (possibly implicit) objective? Is there some way to start a system with a much easier objective than maximizing human preferences, then switch to Russell's proposed objective after the system understands concepts such as "human" and "preference"? How hard is it to identify the right time to do that? I gather that some smart people believe some of these questions need to be tackled head on. My impression is that most of those people think AI safety is a really hard problem. I'm unclear on how hard Russell thinks AI safety is. It's quite possible that there are simple ways to implement Russell's rules, but I'm moderately confident that doing so would require a fairly large detour from what looks like the default path to human-level AI. Compare Russell's approach to [Drexler's ideas](http://www.bayesianinvestor.com/blog/index.php/2019/01/30/drexler-on-ai-risk/) of only putting narrow, short-term goals into any one system. (I think Drexler's writings were circulating somewhat widely before Russell finished writing Human Compatible, but maybe Russell finished his book before he could get access to Drexler's writings). If Drexler's approach is a good way to generate human-level AI, then I expect it to be implemented sooner than Russell's approach will be implemented. Still, we're still at a stage where generating more approaches to AI safety seems more valuable than deciding which one is best. Odds are that the researchers who actually implement the first human-level AIs will have better insights than we do into which approaches are most feasible. So I want to encourage more books of this general nature. Russell's rules show enough promise to be worth a fair amount of research, but I'm guessing they only have something like a 5% or 10% chance of being a good solution to AI risks. ### Miscellaneous Russell ideas often sound closer to those of [Bostrom](http://www.bayesianinvestor.com/blog/index.php/2014/07/28/superintelligence/) and [MIRI](https://intelligence.org) than to those of mainstream AI, yet he dismisses recursive self-improvement and fast takeoff. His reasons sound suspicious - I can't tell whether he's got good intuitions that he has failed to explain, or whether he ignores those scenarios because they're insufficiently mainstream. Russell makes the strange claim that, because existing AI is poor at generalizing across domains, > > when people talk about "machine IQ" increasing rapidly and threatening > to exceed human IQ, they are talking nonsense. > > > But Russell seems to take the opposite position 100 pages later, when he's dismissing Kevin Kelly's [The Myth of a Superhuman AI](https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/). I'm disappointed that Russell didn't cite the [satire](http://aleph.se/andart2/technology/why-fears-of-supersizing-are-misplaced/) of Kelly that argues against the feasibility of bigger than human machines. Russell has a strange response to Bostrom's proposal to use one good AI to defend against any undesirable AIs. Russell says that we'd end up "huddling in bunkers" due to the "titanic forces" involved in battles between AIs. Yet Bostrom's position is clearly dependent on the assumption of a large power asymmetry between the dominant AI (or possibly a dominant coalition of AIs?) and any new bad AI - why would there be much of a battle? I'd expect something more like [Stuxnet](https://en.wikipedia.org/wiki/Stuxnet). There are lots of opinions about how much power disparity there will be between the most powerful AI and a typical new AI, and no obvious way to predict which one is correct. Russell says little about this issue. But suppose such battles are a big problem. How is this concern specific to Bostrom's vision? If battles between AI are dangerous to bystanders, what's the alternative to good AI(s) fighting bad AIs? Does someone have a plan to guarantee that nobody ever creates a bad AI? Russell shows no sign of having such a plan. Russell might be correct here, but if so, the issue deserves more analysis than Russell's dismissal suggests. ### Philosophy Russell concludes with a philosophical section that tackles issues relating to morality. It includes some good thoughts about the difficulties of inferring preferences, and some rather ordinary ideas about utilitarianism, including some standard worries about [the repugnant conclusion](http://www.bayesianinvestor.com/blog/index.php/2009/09/25/turning-the-repugnant-conclusion-into-utopia/). Here's one of Russell's stranger claims: > > in a sense, *all* humans are [utility > monsters](https://en.wikipedia.org/wiki/Utility_monster) relative to, > say, rats and bacteria, which is why we pay little attention to the > preferences of rats and bacteria in setting public policy. > > > Is that why we ignore their preferences? My intuition says it's mostly because we're selfish and not trying to cooperate with them. I don't think I'm paying enough attention to their preferences to have figured out whether we're utility monsters compared to them. ### Conclusion I'll end with a more hopeful note (taken from right after Russell emphasizes that machines won't imitate the behavior of people they observe): > > It's possible, in fact, that if we humans find ourselves in the > unfamiliar situation of dealing with purely altruistic entities on a > daily basis, we may learn to be better people ourselves - more > altruistic and less driven by pride and envy. > > > Human Compatible will be somewhat effective at increasing the diversity of AI safety research, while heading off risks that AI debate will polarize into two tribes. See also [this review](https://www.lesswrong.com/posts/FuGDYNvA6qh4qyFah/thoughts-on-human-compatible) from someone who, unlike me, is doing real AI safety research.
e7740144-e435-434d-a924-caa8e1408095
StampyAI/alignment-research-dataset/arxiv
Arxiv
Evaluating Large Language Models Trained on Code 1 Introduction --------------- Scalable sequence prediction models (Graves, [2014](#bib.bib135 "Generating sequences with recurrent neural networks"); Vaswani et al., [2017](#bib.bib130 "Attention is all you need"); Child et al., [2019](#bib.bib131 "Generating long sequences with sparse transformers")) have become a general-purpose method for generation and representation learning in many domains, including natural language processing (Mikolov et al., [2013](#bib.bib119 "Distributed representations of words and phrases and their compositionality"); Sutskever et al., [2014](#bib.bib120 "Sequence to sequence learning with neural networks"); Dai and Le, [2015](#bib.bib106 "Semi-supervised sequence learning"); Peters et al., [2018](#bib.bib108 "Deep contextualized word representations"); Radford et al., [2018](#bib.bib111 "Improving language understanding by generative pre-training"); Devlin et al., [2018](#bib.bib27 "Bert: pre-training of deep bidirectional transformers for language understanding")), computer vision (Van Oord et al., [2016](#bib.bib117 "Pixel recurrent neural networks"); Menick and Kalchbrenner, [2018](#bib.bib134 "Generating high fidelity images with subscale pixel networks and multidimensional upscaling"); Chen et al., [2020](#bib.bib104 "Generative pretraining from pixels"); Bao et al., [2021](#bib.bib110 "BEiT: bert pre-training of image transformers")), audio and speech processing (Oord et al., [2016](#bib.bib116 "Wavenet: a generative model for raw audio"), [2018](#bib.bib113 "Representation learning with contrastive predictive coding"); Dhariwal et al., [2020](#bib.bib122 "Jukebox: a generative model for music"); Baevski et al., [2020](#bib.bib105 "Wav2vec 2.0: a framework for self-supervised learning of speech representations")), biology (Alley et al., [2019](#bib.bib102 "Unified rational protein engineering with sequence-based deep representation learning"); Rives et al., [2021](#bib.bib103 "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences")), and even across multiple modalities (Das et al., [2017](#bib.bib121 "Visual dialog"); Lu et al., [2019](#bib.bib115 "Vilbert: pretraining task-agnostic visiolinguistic representations for vision-and-language tasks"); Ramesh et al., [2021](#bib.bib93 "Zero-shot text-to-image generation"); Zellers et al., [2021](#bib.bib114 "MERLOT: multimodal neural script knowledge models")). More recently, language models have also fueled progress towards the longstanding challenge of program synthesis (Simon, [1963](#bib.bib101 "Experiments with a heuristic compiler"); Manna and Waldinger, [1971](#bib.bib100 "Toward automatic program synthesis")), spurred by the presence of code in large datasets (Husain et al., [2019](#bib.bib40 "CodeSearchNet challenge: evaluating the state of semantic code search"); Gao et al., [2020](#bib.bib94 "The pile: an 800gb dataset of diverse text for language modeling")) and the resulting programming capabilities of language models trained on these datasets (Wang and Komatsuzaki, [2021](#bib.bib96 "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model")). Popular language modeling objectives like masked language modeling (Devlin et al., [2018](#bib.bib27 "Bert: pre-training of deep bidirectional transformers for language understanding")) and span prediction (Raffel et al., [2020](#bib.bib33 "Exploring the limits of transfer learning with a unified text-to-text transformer")) have also been adapted to train their programming counterparts CodeBERT (Feng et al., [2020](#bib.bib13 "CodeBERT: a pre-trained model for programming and natural languages")) and PyMT5 (Clement et al., [2020](#bib.bib14 "PyMT5: multi-mode translation of natural language and python code with transformers")). Similarly, our early investigation of GPT-3 (Brown et al., [2020](#bib.bib34 "Language models are few-shot learners")) revealed that it could generate simple programs from Python docstrings. While rudimentary, this capability was exciting because GPT-3 was not explicitly trained for code generation. Given the considerable success of large language models in other modalities and the abundance of publicly available code, we hypothesized that a specialized GPT model, called Codex, could excel at a variety of coding tasks. This paper describes several early Codex models, whose descendants power GitHub Copilot and the Codex models in the OpenAI API. ![Pass rates of our models on the HumanEval dataset as a function of model size. When a single sample is generated for each problem, GPT-12B solves no problems, but Codex (fine-tuned on code) solves 28.8% of the problems, and Codex-S (further fine-tuned on correctly implemented standalone functions) solves 37.7% of the problems. From here, further gains can be realized by generating 100 samples per problem and selecting the sample with the highest mean log-probability (44.5% solved) or by selecting the sample that passes the unit tests (77.5% solved). All samples are generated with temperature 0.8.](https://media.arxiv-vanity.com/render-output/7955161/figs/codex-main.png) Figure 1: Pass rates of our models on the HumanEval dataset as a function of model size. When a single sample is generated for each problem, GPT-12B solves no problems, but Codex (fine-tuned on code) solves 28.8% of the problems, and Codex-S (further fine-tuned on correctly implemented standalone functions) solves 37.7% of the problems. From here, further gains can be realized by generating 100 samples per problem and selecting the sample with the highest mean log-probability (44.5% solved) or by selecting the sample that passes the unit tests (77.5% solved). All samples are generated with temperature 0.8. In this work, we focus on the task of generating standalone Python functions from docstrings, and evaluate the correctness of code samples automatically through unit tests. This is in contrast to natural language generation, where samples are typically evaluated by heuristics or by human evaluators. To accurately benchmark our model, we create a dataset of 164 original programming problems with unit tests. These problems assess language comprehension, algorithms, and simple mathematics, with some comparable to simple software interview questions. We release this data along with an evaluation framework at <https://www.github.com/openai/human-eval>. To solve a problem in our test set, we generate multiple samples from the models, and check if any of them pass the unit tests. With just a single sample, a 12B parameter Codex solves 28.8% of these problems, and a 300M parameter Codex solves 13.2% of these problems. In contrast, the 6B parameter GPT-J (Wang and Komatsuzaki, [2021](#bib.bib96 "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model")) achieves 11.4% on the same dataset, while all GPT models achieve near 0%. To improve our model’s performance at the task of function synthesis from docstrings, we fine-tune Codex on standalone, correctly implemented functions. The resulting model, Codex-S, solves 37.7% of problems with a single sample. Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ Evaluating Large Language Models Trained on Code") showcases problems of varying difficulty in our dataset, along with correct model generated solutions. ![Three example problems from the HumanEval dataset, where the probabilities that a single sample from Codex-12B passes unit tests are 0.9, 0.17, and 0.005. The prompt provided to the model is shown with a white background, and a successful model-generated completion is shown in a yellow background. Though not a guarantee for problem novelty, all problems were hand-written and not programmatically copied from existing sources. Random problems and samples can be found in Appendix ](https://media.arxiv-vanity.com/render-output/7955161/figs/codex-figurehead.png) Figure 2: Three example problems from the HumanEval dataset, where the probabilities that a single sample from Codex-12B passes unit tests are 0.9, 0.17, and 0.005. The prompt provided to the model is shown with a white background, and a successful model-generated completion is shown in a yellow background. Though not a guarantee for problem novelty, all problems were hand-written and not programmatically copied from existing sources. Random problems and samples can be found in Appendix [B](#A2 "Appendix B Random Problems and Solutions from Codex-12B ‣ Evaluating Large Language Models Trained on Code"). Real-world programming tasks often involve iterations of approaches and bug fixes, which is approximated by generating many samples from our models and selecting one that passes all unit tests. Within 100 samples, Codex-S is able to generate at least one correct function for 77.5% of the problems. This result suggests that accurate code samples can be selected via heuristic ranking instead of fully evaluating each sample, the latter of which may not be possible or practical in deployment. Indeed, we find that the sample with highest mean log-probability passes unit tests for 44.5% of the problems. We conclude by discussing the limitations and potential broader impacts of these Codex models and of increasingly powerful code generating models more generally. 2 Evaluation Framework ----------------------- In this section, we discuss the details of our evaluation framework. We begin by defining the pass@k metric, and explain its advantages over standard match-based metrics. Next, we describe the dataset of hand-written problems, called “HumanEval,” which we created in order to benchmark our models. Finally, we discuss the sandbox environment we used to safely execute model-generated code. ### 2.1 Functional Correctness Generative models for code are predominantly benchmarked by matching samples against a reference solution, where the match can be exact or fuzzy (as in BLEU score). However, recent work has surfaced deficiencies in match-based metrics for code. For instance, Ren et al. ([2020](#bib.bib15 "Codebleu: a method for automatic evaluation of code synthesis")) finds that BLEU has problems capturing semantic features specific to code, and suggests several semantic modifications to the score. More fundamentally, match-based metrics are unable to account for the large and complex space of programs functionally equivalent to a reference solution. As a consequence, recent works in unsupervised code translation (Lachaux et al., [2020](#bib.bib35 "Unsupervised translation of programming languages")) and pseudocode-to-code translation (Kulal et al., [2019](#bib.bib29 "SPoC: search-based pseudocode to code")) have turned to functional correctness instead, where a sample is considered correct if it passes a set of unit tests. We argue that this metric should be applied to docstring-conditional code generation as well. Perhaps the most convincing reason to evaluate functional correctness is that it is used by human developers to judge code. A framework known as test-driven development dictates that software requirements be converted into test cases before any implementation begins, and success is defined by a program that passes these tests. While few organizations employ full test-driven development, integration of new code is usually dependent on creating and passing unit tests. Kulal et al. ([2019](#bib.bib29 "SPoC: search-based pseudocode to code")) evaluate functional correctness using the pass@k metric, where k code samples are generated per problem, a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported. However, computing pass@k in this way can have high variance. Instead, to evaluate pass@k, we generate n≥k samples per task (in this paper, we use n=200 and k≤100), count the number of correct samples c≤n which pass unit tests, and calculate the unbiased estimator | | | | | | | --- | --- | --- | --- | --- | | | pass@k | :=EProblems[1−(n−c)k(nk)] | | (1) | Calculating this estimator directly results in very large numbers and numerical instability. In Figure [3](#S2.F3 "Figure 3 ‣ 2.1 Functional Correctness ‣ 2 Evaluation Framework ‣ Evaluating Large Language Models Trained on Code"), we include a numerically stable numpy implementation that simplifies the expression and evaluates the product term-by-term. One may be tempted to estimate pass@k with 1−(1−^p)k where ^p is the empirical estimate of pass@1, but we show that it is biased in Appendix [A](#A1 "Appendix A Estimating pass@k ‣ Evaluating Large Language Models Trained on Code"). [⬇](http://data:text/plain;base64,ZGVmIHBhc3NfYXRfayhuLCBjLCBrKToKICAgICIiIgogICAgOnBhcmFtIG46IHRvdGFsIG51bWJlciBvZiBzYW1wbGVzCiAgICA6cGFyYW0gYzogbnVtYmVyIG9mIGNvcnJlY3Qgc2FtcGxlcwogICAgOnBhcmFtIGs6IGsgaW4gcGFzc0AkayQKICAgICIiIgogICAgaWYgbiAtIGMgPCBrOiByZXR1cm4gMS4wCiAgICByZXR1cm4gMS4wIC0gbnAucHJvZCgxLjAgLSBrIC8KICAgICAgICBucC5hcmFuZ2UobiAtIGMgKyAxLCBuICsgMSkp) def pass\_at\_k(n, c, k):     """ ␣␣␣␣:param␣n:␣total␣number␣of␣samples ␣␣␣␣:param␣c:␣number␣of␣correct␣samples ␣␣␣␣:param␣k:␣k␣in␣pass@$k$ ␣␣␣␣"""     if n - c < k: return 1.0     return 1.0 - np.prod(1.0 - k /         np.arange(n - c + 1, n + 1)) Figure 3: A numerically stable script for calculating an unbiased estimate of pass@k. Later, we provide evidence that BLEU score may not be a reliable indicator of functional correctness by showing that functionally inequivalent programs generated by our model (which are guaranteed to disagree with the reference solution on some input) often have higher BLEU scores than functionally equivalent ones. ### 2.2 HumanEval: Hand-Written Evaluation Set We evaluate functional correctness on a set of 164 hand-written programming problems, which we call the HumanEval dataset. Each problem includes a function signature, docstring, body, and several unit tests, with an average of 7.7 tests per problem. It is important for these tasks to be hand-written, since our models are trained on a large fraction of GitHub, which already contains solutions to problems from a variety of sources. For example, there are more than ten public repositories containing solutions to Codeforces problems, which make up part of the recently proposed APPS dataset (Hendrycks et al., [2021](#bib.bib21 "Measuring coding challenge competence with apps")). Programming tasks in the HumanEval dataset assess language comprehension, reasoning, algorithms, and simple mathematics. We release the HumanEval dataset so that others can evaluate functional correctness and measure the problem-solving capabilities of their models. The dataset can be found at <https://www.github.com/openai/human-eval>. ### 2.3 Sandbox for Executing Generated Programs Since publicly available programs have unknown intent and generated programs are often incorrect, executing these programs poses a security risk. Indeed, GitHub is known to contain malicious programs that alter or change their environments (Rokon et al., [2020](#bib.bib82 "SourceFinder: finding malware source-code from publicly available repositories in github")). Therefore, we developed a sandbox environment to safely run untrusted programs against unit tests. Our goals were to prevent these programs from modifying, gaining persistence on, accessing sensitive resources on, or exfiltrating data from a host or network. Since OpenAI’s training infrastructure is built on Kubernetes and cloud services, we designed our sandbox to address the limitations of these environments while remaining idiomatic with their patterns of use. We selected the gVisor container runtime (Lacasse, [2018](#bib.bib10 "Open-sourcing gvisor, a sandboxed container runtime")) as the main host protection component. Since container runtimes like Docker can share host resources with containers, a malicious container could potentially compromise a host. gVisor protects the host by emulating its resources to introduce a security boundary between the host and its containers. Network-adjacent hosts and services are protected by eBPF-based firewall rules that prevent inbound and outbound connections except for those required for experiment control. 3 Code Fine-Tuning ------------------- We fine-tune GPT models containing up to 12B parameters on code to produce Codex. In contrast with GPT, Codex displays non-trivial performance on the HumanEval dataset. In fact, Codex is able to solve the majority of the problems in HumanEval if we generate and evaluate 100 samples per problem, and pick one that passes unit tests. When limited to a budget of one evaluation per problem, producing multiple samples with Codex and choosing the one with the highest mean log-probability provides significant gains. ### 3.1 Data Collection Our training dataset was collected in May 2020 from 54 million public software repositories hosted on GitHub, containing 179 GB of unique Python files under 1 MB. We filtered out files which were likely auto-generated, had average line length greater than 100, had maximum line length greater than 1000, or contained a small percentage of alphanumeric characters. After filtering, our final dataset totaled 159 GB. ### 3.2 Methods Since Codex is evaluated on natural language prompts, we hypothesized that it would be beneficial to fine-tune from the GPT-3 (Brown et al., [2020](#bib.bib34 "Language models are few-shot learners")) model family, which already contains strong natural language representations. Surprisingly, we did not observe improvements when starting from a pre-trained language model, possibly because the fine-tuning dataset is so large. Nevertheless, models fine-tuned from GPT converge more quickly, so we apply this strategy for all subsequent experiments. We train Codex using the same learning rate as the corresponding GPT model, with a 175 step linear warmup and cosine learning rate decay. We train for a total of 100 billion tokens, using the Adam optimizer with β1=0.9, β2=0.95, ϵ=10−8, and a weight decay coefficient of 0.1. In order to maximally leverage text representations from GPT, we base our code lexer on the GPT-3 text tokenizer. Since the distribution of words in GitHub code differs from that of natural text, this tokenizer is not very effective for representing code. The largest source of inefficiency arises from encoding whitespace, so we add an additional set of tokens for representing whitespace runs of different lengths. This allows us to represent code using approximately 30% fewer tokens. To compute pass@k, we assemble each HumanEval problem into a prompt consisting of a header, a signature, and a docstring, which is illustrated in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ Evaluating Large Language Models Trained on Code"). We sample tokens from Codex until we encounter one of the following stop sequences: ‘\nclass’, ‘\ndef’, ‘\n#’, ‘\nif’, or ‘\nprint’, since the model will continue generating additional functions or statements otherwise. We use nucleus sampling (Holtzman et al., [2020](#bib.bib133 "The curious case of neural text degeneration")) with top p=0.95 for all sampling evaluation in this work. ### 3.3 Results ![Model cross-entropy test loss measured on a held-out split of our Python GitHub code corpus. The smooth power law scaling of performance with model size observed in GPT-3 appears to hold even after code fine-tuning.](https://media.arxiv-vanity.com/render-output/7955161/figs/ft-test-loss-vs-size.png) Figure 4: Model cross-entropy test loss measured on a held-out split of our Python GitHub code corpus. The smooth power law scaling of performance with model size observed in GPT-3 appears to hold even after code fine-tuning. In Figure [4](#S3.F4 "Figure 4 ‣ 3.3 Results ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code"), we plot test loss on a held-out validation set against Codex model size. We find that just as language model test loss follows a power law in model size (Kaplan et al., [2020](#bib.bib132 "Scaling laws for neural language models")), test loss after code fine-tuning follows a similar power law with functional form (N5.92×107)−0.13 where N is the number of non-embedding parameters in the model. When evaluating pass@k, it is important to optimize sampling temperature for the particular value of k. In Figure [5](#S3.F5 "Figure 5 ‣ 3.3 Results ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code"), we plot pass@k against the number of samples k and the sampling temperature. We find that higher temperatures are optimal for larger k, because the resulting set of samples has higher diversity, and the metric rewards only whether the model generates any correct solution. In particular, for a 679M parameter model, the optimal temperature for pass@1 is T∗=0.2 and the optimal temperature for pass@100 is T∗=0.8. With these temperatures, we find that pass@1 and pass@100 scale smoothly as a function of model size (Figure [6](#S3.F6 "Figure 6 ‣ 3.3 Results ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code")). ![In the top panel, we plot pass@](https://media.arxiv-vanity.com/render-output/7955161/figs/ft-pass-vs-temp.png) Figure 5: In the top panel, we plot pass@k against the number of samples (k) for various temperature settings. Higher temperatures are better when the number of samples is large, likely due to the increased sample diversity. In the bottom panel, we plot the best temperature setting for each k, obtained by taking the upper hull of the top panel. ![Using the optimal temperatures 0.2 and 0.8 for pass@1 and pass@100, we plot these two metrics as a function of model size. Performance appears to scale smoothly as a sigmoid in log-parameters.](https://media.arxiv-vanity.com/render-output/7955161/figs/ft-pass-vs-size.png) Figure 6: Using the optimal temperatures 0.2 and 0.8 for pass@1 and pass@100, we plot these two metrics as a function of model size. Performance appears to scale smoothly as a sigmoid in log-parameters. Pass@k can also be interpreted as the result of evaluating the best out of k samples, where the best sample is picked by an oracle with prior knowledge of the unit tests. From a practical perspective, we are also interested in the setting where we must select a single sample from k samples without having access to an oracle. For instance, when the model is used as an autocomplete tool where a user provides a prompt, we do not have unit tests, but would like to return only a single completion to the user for evaluation so as to not overwhelm them. Inspired by similar work in language modeling, we find that choosing the sample with the highest mean token log probability outperforms evaluating a random sample, while choosing the sample based on sum log probability can perform slightly worse than picking randomly. Figure [7](#S3.F7 "Figure 7 ‣ 3.3 Results ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code") demonstrates the benefits of applying these heuristics to samples (at temperature 0.8) from Codex-12B. ![Model performance in the setting where we can generate multiple samples, but only evaluate one. We can do better than randomly selecting a sample by choosing the solution with the highest mean log-probability (red) or with the highest back-translation score (orange) described in Sec. ](https://media.arxiv-vanity.com/render-output/7955161/figs/sample-ranking.png) Figure 7: Model performance in the setting where we can generate multiple samples, but only evaluate one. We can do better than randomly selecting a sample by choosing the solution with the highest mean log-probability (red) or with the highest back-translation score (orange) described in Sec. [5](#S5 "5 Docstring Generation ‣ Evaluating Large Language Models Trained on Code"). The blue line represents the theoretical best performance obtained using an oracle with prior knowledge of the unit tests. Finally, we compute BLEU scores for all Codex-12B HumanEval samples (at temperature 0.8) against their reference solutions. For each problem, when we plot the distributions of BLEU scores for correct and incorrect solutions, we notice significant overlap (Figure [8](#S3.F8 "Figure 8 ‣ 3.3 Results ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code")). Since an incorrect solution is guaranteed to be functionally inequivalent to the reference solution, we conclude that improvements in BLEU score may not indicate improved rates of functional correctness in practice. ![BLEU score probability densities for correct (blue) and wrong (green) solutions from Codex-12B for 4 random tasks from HumanEval. Note that the distributions are not cleanly separable, suggesting that optimizing for BLEU score is not equivalent to optimizing for functional correctness.](https://media.arxiv-vanity.com/render-output/7955161/figs/bleu-dist.png) Figure 8: BLEU score probability densities for correct (blue) and wrong (green) solutions from Codex-12B for 4 random tasks from HumanEval. Note that the distributions are not cleanly separable, suggesting that optimizing for BLEU score is not equivalent to optimizing for functional correctness. ### 3.4 Comparative Analysis of Related Models and Systems Two recent works similar in spirit to Codex are GPT-Neo (Black et al., [2021](#bib.bib95 "GPT-Neo: large scale autoregressive language modeling with mesh-tensorflow")) and GPT-J (Wang and Komatsuzaki, [2021](#bib.bib96 "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model")), which are trained on The Pile (Gao et al., [2020](#bib.bib94 "The pile: an 800gb dataset of diverse text for language modeling")), a dataset containing text from a variety of sources as well as 8% GitHub code. The broader research community has found that these models outperform existing GPT systems in qualitative programming evaluations (Woolf, [2021](#bib.bib57 "Fun and dystopia with ai-based code generation using gpt-j-6b")). We confirm these findings using the HumanEval dataset, showing that GPT-Neo achieves 6.4% pass@1 and 21.3% pass@100, while GPT models of comparable sizes achieve near 0% on both metrics. We see a remarkable progression in capabilities, with GPT-Neo-2.7B roughly equivalent to Codex-85M (30× fewer parameters). Similarly, GPT-J-6B achieves 11.6% pass@1 and 27.7% pass@100, which is roughly equivalent to Codex-300M (20× fewer parameters). Pass rates are obtained by taking the best result from evaluating at temperatures 0.2, 0.4, and 0.8 for GPT-Neo, and from temperatures 0.2 and 0.8 for GPT-J. Detailed results across multiple model sizes can be found in Table [1](#S3.T1 "Table 1 ‣ 3.4 Comparative Analysis of Related Models and Systems ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code"). Finally, we benchmark Codex against the largest free model from Tabnine, a leading code autocomplete system, which achieves 2.6% pass@1 (at T=0.4) and 7.6% pass@100 (at T=0.8). This is roughly equivalent to Codex-12M, one of the smallest models in our suite. | | pass@k | | --- | --- | | | k=1 | k=10 | k=100 | | GPT-Neo 125M | 0.75% | 1.88% | 2.97% | | GPT-Neo 1.3B | 4.79% | 7.47% | 16.30% | | GPT-Neo 2.7B | 6.41% | 11.27% | 21.37% | | GPT-J 6B | 11.62% | 15.74% | 27.74% | | TabNine | 2.58% | 4.35% | 7.59% | | Codex-12M | 2.00% | 3.62% | 8.58% | | Codex-25M | 3.21% | 7.1% | 12.89% | | Codex-42M | 5.06% | 8.8% | 15.55% | | Codex-85M | 8.22% | 12.81% | 22.4% | | Codex-300M | 13.17% | 20.37% | 36.27% | | Codex-679M | 16.22% | 25.7% | 40.95% | | Codex-2.5B | 21.36% | 35.42% | 59.5% | | Codex-12B | 28.81% | 46.81% | 72.31% | Table 1: Codex, GPT-Neo, & TabNine evaluations for HumanEval. We find that GPT-J pass@1 is between Codex-85M and Codex-300M performance. ### 3.5 Results on the APPS Dataset Recently, Hendrycks et al. ([2021](#bib.bib21 "Measuring coding challenge competence with apps")) introduced the APPS dataset to measure the coding challenge competence of language models. The APPS dataset consists of 5000 training and 5000 test examples of coding problems, each with a set of unit tests and, for the training data, a set of correct solutions. Most of the APPS tests problems are not formulated as single-function synthesis tasks, but rather as full-program synthesis, reading input from stdin and printing output to stdout, in contrast to the main Codex training data. In the paper that introduces APPS, the authors benchmark a few language models and report two metrics: the percentage of problems where the model finds a correct solution (called the “strict accuracy”) and the percentage of unit tests passed, even if the solution is incorrect. The latter measure is reported only so as to reduce variance of the measurements, because the results on the first metric were so low. We avoid this metric and only focus on “strict accuracy”, and - as in the previous sections - we report pass@k numbers for various k (Table [2](#S3.T2 "Table 2 ‣ 3.5 Results on the APPS Dataset ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code")). There are 2 additional factors, well-known from coding competitions, that we take into account: * In coding competitions and in the APPS datasets, tasks are provided with 3 input/output examples included in the task description. We utilize this by sampling 1000 solutions from the model and filtering out only those that pass these 3 unit tests (if such solutions exist). We then calculate pass rates in this filtered set, and call it filtered pass@k. Results without filtering are presented as raw pass@k. * It is often the case both in coding competitions and in the results from Codex that a correct solution is found, but it is not algorithmically efficient enough to be considered passing. While this is not acceptable in the competitions, we also report the number of solutions that Codex produces that do not fail on any unit test, but that do time-out on some of them. We use a timeout of 3 seconds in our evaluation. To compensate for the fact the Codex is not fine-tuned on APPS, we append a single input/output example from the task description to the docstring as a formatting hint. We denote this setting as “1-shot” in Table [2](#S3.T2 "Table 2 ‣ 3.5 Results on the APPS Dataset ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code"), and find that Codex-12B evaluated 1-shot achieves comparable performance to a GPT-Neo model fine-tuned on APPS. Consistent with our earlier findings, there are large benefits from generating and evaluating as many as 1000 samples per task, though for more difficult problems, solutions are often not efficient enough to pass the time limits. Finally, evaluating the first sample which passes the 3 public unit tests for each problem yields higher performance than raw@100 samples. | | Introductory | Interview | Competition | | --- | --- | --- | --- | | GPT-Neo 2.7B raw pass@1 | 3.90% | 0.57% | 0.00% | | GPT-Neo 2.7B raw pass@5 | 5.50% | 0.80% | 0.00% | | 1-shot Codex raw pass@1 | 4.14% (4.33%) | 0.14% (0.30%) | 0.02% (0.03%) | | 1-shot Codex raw pass@5 | 9.65% (10.05%) | 0.51% (1.02%) | 0.09% (0.16%) | | 1-shot Codex raw pass@100 | 20.20% (21.57%) | 2.04% (3.99%) | 1.05% (1.73%) | | 1-shot Codex raw pass@1000 | 25.02% (27.77%) | 3.70% (7.94%) | 3.23% (5.85%) | | 1-shot Codex filtered pass@1 | 22.78% (25.10%) | 2.64% (5.78%) | 3.04% (5.25%) | | 1-shot Codex filtered pass@5 | 24.52% (27.15%) | 3.23% (7.13%) | 3.08% (5.53%) | Table 2: Finetuned GPT-Neo numbers from the APPS paper referenced above. For Codex-12B, the number of passing programs that timeout on some test is in the bracket. We used temperature 0.6 for sampling to cover all k in pass@k, so raw pass@1 results could be improved with lower temperature. 4 Supervised Fine-Tuning ------------------------- In addition to standalone functions, Python code found on GitHub contains class implementations, configuration files, scripts, and even files used to store data. This code is seemingly unrelated to synthesizing functions from docstrings, and we hypothesize that the distribution mismatch reduces HumanEval performance. In order to adapt Codex to the distribution of the task of interest, we construct a set of training problems from correctly implemented standalone functions, and use them for additional supervised fine-tuning. We describe two approaches for collecting these examples: from competitive programming websites and from repositories with continuous integration. We call the supervised fine-tuned models Codex-S, and show that they produce consistent gains across model size. ### 4.1 Problems from Competitive Programming Programming contest and interview preparation websites use hidden unit tests to automatically judge the functional correctness of submissions. These problems are self-contained, come with well-written problem statements, and generally have excellent test coverage. Additionally, these problems test algorithmic reasoning over a broad range of core skills and difficulties. We collected problem statements, function signatures, and solutions from several popular programming contest and interview preparation websites. We then assembled these into programming tasks similar to HumanEval, using the problem description as the docstring. Since complete test suites are often hidden, we created unit tests from examples found in the problem statements, or extracted additional test cases through submitting incorrect solutions. In total, we curated 10,000 problems in this way. ### 4.2 Problems from Continuous Integration Next, we curated programming problems from open source projects. Taking advantage of sys.setprofile, we were able to trace and collect inputs and outputs for all functions called during integration tests. This data could then be used to create unit tests for the functions. Projects that employ continuous integration (CI) are ideal candidates for tracing. We follow the commands in the CI configuration files, which contain build and test commands, to set up the virtual environments, install dependencies, and run integration tests. We considered GitHub repos using travis and tox as their CI frameworks, as they are two of the most popular CI tools. We additionally used publicly available source code from pip packages found in the python package index (PyPI). Because these projects contained untrusted code, it was important to run integration tests in the sandboxed environment described above. While there are millions of potential functions to curate problems from, we only collected about 40,000 because not all functions accept inputs and return outputs. Even when they do, most objects captured at runtime cannot be pickled and restored outside the sandbox unless the project was installed. Since our tracing methodology produced inputs and outputs for all invoked functions, even builtin and library calls imported by the project were turned into problems. For this reason, functions from tracing tended to be the building blocks of command-line utilities. To excel at these tasks, the model does not need to know advanced algorithms and data structures. Rather, it needs to be able to follow instructions to implement the functionality specified in the docstring. Thus, tracing complements the puzzle nature of coding competition problems and broadens the distribution of tasks. ### 4.3 Filtering Problems In the previous sections, we presented two methods we used to automatically create training problems. However, it is unclear how to control for quality. Some prompts underspecify the function that is implemented, in which case a perfectly valid solution may be wrongly penalized by the unit test. Some problems are stateful, and subsequent executions can result in different outcomes. To address these issues, we use Codex-12B to generate 100 samples per curated problem. If no samples pass the unit tests, we consider the task to be either ambiguous or too difficult, and filter it out. We reran this verification several times to remove stateful or non-deterministic problems. ### 4.4 Methods We fine-tune Codex on these training problems to produce a set of “supervised fine-tuned” models, which we call Codex-S. To produce examples from training problems, we assemble the problems into the format shown in Figure [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ Evaluating Large Language Models Trained on Code"). If there are prompts of varying length in a batch, we left-pad shorter prompts to the length of the longest prompt, so that the first tokens in the reference solutions line up in context. We train to minimize negative log-likelihood of the reference solution, and mask out loss for any tokens in the prompt. We train using a learning rate 1/10 as large as used for fine-tuning Codex, but adhere to the same learning rate schedule, and train until validation loss plateaus (less than 10B tokens). ### 4.5 Results As with Codex, we first compute the optimal temperature for evaluating pass@k for 1≤k≤100. We find that Codex-S prefers slightly higher temperatures for all k>1, which possibly reflects the fact that Codex-S captures a narrower distribution than Codex. We use T∗=0 for computing pass@1 and T∗=1 for computing pass@100. ![Optimal sampling temperatures as a function of the number of samples generated for both Codex and Codex-S. Codex-S generally requires a higher temperature for any particular value of ](https://media.arxiv-vanity.com/render-output/7955161/figs/sup-pass-vs-temp.png) Figure 9: Optimal sampling temperatures as a function of the number of samples generated for both Codex and Codex-S. Codex-S generally requires a higher temperature for any particular value of k, possibly to compensate for the fact that it models a narrower distribution. Next, we compare Codex-S against Codex on pass@1 and pass@100. Codex-S outperforms the corresponding Codex by an average margin of 6.5 percentage points on pass@1 and by a larger average margin of 15.1 percentage points on pass@100 across model size. We also plot the performance of different sample selection heuristics for Codex-S-12B against the same heuristics for Codex-12B. When ranking between 1 and 100 samples by mean log probability, the average benefit over random ranking is 11.6 percentage points, which is over 2 percentage points higher than the corresponding benefit for Codex. ![Comparing Codex-S against Codex on the metrics proposed in Section ](https://media.arxiv-vanity.com/render-output/7955161/figs/codexs-v-codex.png) Figure 10: Comparing Codex-S against Codex on the metrics proposed in Section [3](#S3 "3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code"). Codex-S is one or two orders of magnitude more parameter efficient on pass@1 and pass@100, and log-prob sample ranking with Codex-S yields similar benefits over random sampling that Codex does. 5 Docstring Generation ----------------------- Generating code from docstrings is possible with Codex because code typically follows after a docstring, but it is not easy to induce Codex to generate docstrings from code. Nevertheless, we are motivated to produce a docstring writing model for safety reasons, as such a model can be used to describe the intent behind generated code. Using the training problems described in the previous section, we can easily create a training dataset for code-conditional docstring generation. Specifically, for each training problem, we assemble a training example by concatenating the function signature, the reference solution, and then the docstring. Just as we train Codex-S by minimizing negative log-likelihood of the reference solution, we train the docstring generating models Codex-D by minimizing negative log-likelihood of the docstring. When we benchmark our code generation models, we measure pass@k on the HumanEval dataset, where correctness is defined by passing a set of unit tests. However, there is no similar way to evaluate docstring samples automatically. Therefore, we grade sample docstrings by hand, considering a docstring correct if it uniquely and accurately specifies the code body. Due to the time consuming nature of this process, we only grade 10 samples per problem, for a total of 1640 problems, from Codex-D-12B at temperature 0.8. Codex-D often generates incorrect unit tests along with a docstring, but we ignore these during grading. However, we do not consider the docstring correct when the model simply copies the code body into the docstring. The most common failure modes we observe are when the docstring model leaves out an important detail (such as “an answer must be to two decimal places”) or when it over-conditions on the function name and invents a problem unrelated to the function body. As shown in Table [3](#S5.T3 "Table 3 ‣ 5 Docstring Generation ‣ Evaluating Large Language Models Trained on Code"), pass rates for Codex-D are lower but comparable to the corresponding pass rates for Codex-S at the same temperature. We do not have a strong hypothesis for which direction should yield higher pass rates. While generating docstrings may be more forgiving because natural language syntax is less strict than code syntax, docstrings in our dataset may be lower quality because developers tend to devote less time to writing docstrings. Indeed, our model produces docstrings like “I just found this function online” and “This test is not correctly written and it’s not my solution.” | Model | pass@1 | pass@10 | | --- | --- | --- | | Codex-S-12B | 32.2% | 59.5% | | Codex-D-12B | 20.3% | 46.5% | Table 3: Pass rates for our docstring generating model Codex-D, which is evaluated by hand-grading 10 samples per task due to the lack of a ground-truth automatic evaluation. We find similar but lower pass-rates compared to Codex-S. Finally, with a docstring model, we have yet another way to choose a single sample from a set of k samples. Instead of picking the sample with the best mean log probability as investigated in the previous two sections, we can choose the sample that maximizes the back-translation objective P(ground truth docstring|generated sample) where P is evaluated using Codex-D. Unfortunately, in Figure [7](#S3.F7 "Figure 7 ‣ 3.3 Results ‣ 3 Code Fine-Tuning ‣ Evaluating Large Language Models Trained on Code"), we show that ranking samples via back-translation underperforms mean log-probability ranking, though it outperforms random ranking. This heuristic also appears to overfit quickly. 6 Limitations -------------- While Codex is able to sample correct solutions for the majority of HumanEval problems, we find that it has a number of limitations. First, Codex is not sample efficient to train. Our training dataset comprises a significant fraction of publicly available Python code on GitHub, totaling hundreds of millions of lines of code. Even seasoned developers do not encounter anywhere near this amount of code over their careers. Indeed, a strong student who completes an introductory computer science course is expected to be able to solve a larger fraction of problems than Codex-12B. Next, we explore prompts on which Codex is likely to fail or display counter-intuitive behavior. While evaluating code generation is well-studied (Xu et al., [2021](#bib.bib46 "In-ide code generation from natural language: promise and challenges"); Helmuth and Spector, [2015](#bib.bib53 "General program synthesis benchmark suite"); Pantridge et al., [2017](#bib.bib54 "On the difficulty of benchmarking inductive program synthesis methods")), many existing metrics measure performance in tightly specified, constrained problem instances (e.g., string manipulation in FlashFill (Gulwani, [2011](#bib.bib59 "Automating string processing in spreadsheets using input-output examples"))). Therefore, we developed a set of qualitative metrics for measuring the capabilities of code generating models while controlling for the complexity and abstraction level of the specifications (Appendix [D](#A4 "Appendix D Details of Specification-based Evaluation Framework ‣ Evaluating Large Language Models Trained on Code")). Applying this framework, we find that Codex can recommend syntactically incorrect or undefined code, and can invoke functions, variables, and attributes that are undefined or outside the scope of the codebase. Moreover, Codex struggles to parse through increasingly long and higher-level or system-level specifications. To concretely illustrate model performance degradation as docstring length increases, we create a dataset of synthetic problems assembled from 13 basic building blocks, each of which modifies an input string in a deterministic way. Example building blocks are “convert the string to lowercase” or “remove every third character from the string” (the full list is described in Appendix [C](#A3 "Appendix C Building Blocks for Synthetic Tasks ‣ Evaluating Large Language Models Trained on Code")). We find that as the number of chained building blocks in the docstring increases, model performance decreases exponentially. This behavior is uncharacteristic of a human programmer, who should be able to correctly implement a program for a chain of arbitrary length if they can do so for a chain of length two. ![Pass rates of Codex-12B samples against the number of chained components in the synthetically generated docstring. With each additional component, pass rate drops by roughly a factor of 2-3.](https://media.arxiv-vanity.com/render-output/7955161/figs/codex-synth.png) Figure 11: Pass rates of Codex-12B samples against the number of chained components in the synthetically generated docstring. With each additional component, pass rate drops by roughly a factor of 2-3. Further, just as text-conditional generative models in other modalities (Ramesh et al., [2021](#bib.bib93 "Zero-shot text-to-image generation")) have difficulty with binding attributes to objects, Codex can make mistakes binding operations to variables, especially when the number of operations and variables in the docstring is large. For instance, in the following prompt, Codex-12B does not decrement the variable w and also fails to return the product of all numbers. [⬇](http://data:text/plain;base64,ZGVmIGRvX3dvcmsoeCwgeSwgeiwgdyk6CiAgICAiIiIgQWRkIDMgdG8geSwgdGhlbiBzdWJ0cmFjdCA0CiAgICBmcm9tIGJvdGggeCBhbmQgdy4gUmV0dXJuIHRoZQogICAgcHJvZHVjdCBvZiB0aGUgZm91ciBudW1iZXJzLiAiIiIKICAgIHQgPSB5ICsgMwogICAgdSA9IHggLSA0CiAgICB2ID0geiAqIHcKICAgIHJldHVybiB2) def do\_work(x, y, z, w):     """␣Add␣3␣to␣y,␣then␣subtract␣4 ␣␣␣␣from␣both␣x␣and␣w.␣Return␣the ␣␣␣␣product␣of␣the␣four␣numbers.␣"""     t = y + 3     u = x - 4     v = z \* w     return v This understanding of Codex’s limited system-level synthesis capabilities helps inform our assessment of the potential hazards of using it in a generative capacity, as well as the broader societal impacts that such systems could have. 7 Broader Impacts and Hazard Analysis -------------------------------------- Codex has the potential to be useful in a range of ways. For example, it could help onboard users to new codebases, reduce context switching for experienced coders, enable non-programmers to write specifications and have Codex draft implementations, and aid in education and exploration. However, Codex also raises significant safety challenges, does not always produce code that is aligned with user intent, and has the potential to be misused. To better understand some of the hazards of using Codex in a generative capacity, we conducted a hazard analysis focused on identifying risk factors (Leveson, [2019](#bib.bib84 "Improving the standard risk matrix: part 1")) with the potential to cause harm.111We sought to include harms spanning geographic and temporal scales. We also considered not only the severity and probability, but also the distribution of harms. However, we note that the analysis described here is only one milestone in what we hope will be a larger cross-sectoral and cross-organizational effort to steer code generation in a societally beneficial direction. As we describe our findings, we note various specific uncertainties and areas for future work in different sections. We outline some of our key findings across several risk areas below. While some of our findings about the potential societal impacts of code generation systems were informed by work towards responsible deployment of the production-oriented Codex models (which descended from the research-oriented Codex models described in this paper), this section is not intended to provide a full account of any particular product’s safety features. Unless otherwise specified, we anchor our analysis in the specific properties of the models described in this paper. We share this analysis in the belief that some of it generalizes to the broader class of code generation systems, and to encourage a norm of performing detailed impact analysis as part of major machine learning research projects. Note that by focusing largely on risks in this section, we do not mean to imply that we expect the impact of this class of technologies to be net-negative; rather, risks merit particular attention here because they may be subtle or require deliberate effort to address, whereas we expect the benefits to be more obvious and “automatic” from the perspective of most users and affected stakeholders. ### 7.1 Over-reliance One of the key risks associated with using code generation models in practice is over-reliance on generated outputs. Due to the limitations described above as well as alignment issues described below, Codex may suggest solutions that superficially appear correct but do not actually perform the task the user intended. This could particularly affect novice programmers, and could have significant safety implications depending on the context. We discuss a related issue in Appendix [G](#A7 "Appendix G Supplemental security analysis ‣ Evaluating Large Language Models Trained on Code"), namely that code generation models can suggest insecure code. For these reasons, human oversight and vigilance is required for safe use of code generation systems like Codex. We note several immediate ways to improve safety in the subsection on risk mitigation below, though over-reliance in particular is one that we believe merits further inquiry in industry and academia. While it is conceptually straightforward to provide documentation to users reminding them about model limitations, empirical investigation is necessary in order to identify how to reliably ensure vigilance in practice across a range of user experience levels, UI designs, and tasks. One challenge researchers should consider is that as capabilities improve, it may become increasingly difficult to guard against “automation bias.” ### 7.2 Misalignment As with other large language models trained on a next-token prediction objective, Codex will generate code that is as similar as possible to its training distribution. One consequence of this is that such models may do things that are unhelpful for the user, despite having the capability to be more helpful (see Figure [12](#S7.F12 "Figure 12 ‣ 7.2 Misalignment ‣ 7 Broader Impacts and Hazard Analysis ‣ Evaluating Large Language Models Trained on Code")). For example, if the user has some subtle mistakes in their code, Codex may “deliberately” suggest code that superficially appears good but is incorrect. ![When the prompt includes subtle bugs, Codex tends to produce worse code than it is capable of. This persists when the prompt also includes instructions to write correct code. This gap increases with model size.](https://media.arxiv-vanity.com/render-output/7955161/figs/align-small-4.png) Figure 12: When the prompt includes subtle bugs, Codex tends to produce worse code than it is capable of. This persists when the prompt also includes instructions to write correct code. This gap increases with model size. This is an alignment failure - the model is not aligned with the user’s intentions. Informally, a system is misaligned if there’s some task X that we want it to do, and it is “capable” of doing X but “chooses” not to. In contrast, if a system fails to do X because it does not have the ability to do so, then this system is not misaligned; it is just incompetent. See Appendix [E](#A5 "Appendix E Analysis of Alignment Problems ‣ Evaluating Large Language Models Trained on Code") for more detail, including a more precise definition of alignment. It is important to study misalignment because it is a problem that is likely to become worse, not better, as the capabilities of our systems increase. For example, the model size scaling trend for the example in Figure [12](#S7.F12 "Figure 12 ‣ 7.2 Misalignment ‣ 7 Broader Impacts and Hazard Analysis ‣ Evaluating Large Language Models Trained on Code") indicates that misalignment would likely persist and even get worse if data, parameters, and training time were scaled up. While we expect that misaligned behaviour like this is unlikely to cause significant harm in current models, it is likely to become more dangerous and harder to eliminate as model capabilities increase. A highly capable but sufficiently misaligned model trained on user approval might produce obfuscated code that looks good to the user even on careful inspection, but in fact does something undesirable or even harmful. ### 7.3 Bias and representation Mirroring what has been found in the case of other language models trained on Internet data (Bender et al., [2021](#bib.bib60 "On the dangers of stochastic parrots: can language models be too big?"); Blodgett et al., [2020](#bib.bib12 "Language (technology) is power: a critical survey of “bias” in nlp"); Abid et al., [2021](#bib.bib55 "Persistent anti-muslim bias in large language models"); Brown et al., [2020](#bib.bib34 "Language models are few-shot learners")), we found that Codex can be prompted in ways that generate racist, denigratory, and otherwise harmful outputs as code comments, meriting interventions such as those discussed in the subsection on risk mitigation below. We also found that code generation models raise further bias and representation issues beyond problematic natural language: Codex can generate code with structure that reflects stereotypes about gender, race, emotion, class, the structure of names, and other characteristics. Particularly in the context of users who might over-rely on Codex or use it without first thinking through project design, this issue could have significant safety implications, giving further motivation to discourage over-reliance. We discuss bias and representation issues further in Appendix [F](#A6 "Appendix F Supplemental Bias Analysis ‣ Evaluating Large Language Models Trained on Code"). Filtration or modulation of generated outputs, documentation, and other interventions may help to mitigate these risks. ### 7.4 Economic and labor market impacts Code generation and associated capabilities have several possible economic and labor market impacts. While Codex at its current capability level may somewhat reduce the cost of producing software by increasing programmer productivity, the size of this effect may be limited by the fact that engineers don’t spend their full day writing code (O\*NET, [2021](#bib.bib65 "15-1252.00 - software developers")). Other important tasks include conferring with colleagues, writing design specifications, and upgrading existing software stacks.222Indeed, BLS classifies computer programmers and software developers separately, where developers are more highly paid than programmers, have more tasks indirectly related to writing and interacting with code, and, in the US, are already projected to see greater demand over the next 10 years (Li et al., [2020](#bib.bib68 "What distinguishes great software engineers?"); Bureau of Labor Statistics, [2021b](#bib.bib63 "Computer programmers"), [a](#bib.bib64 "BLS - software developers")). We also found that Codex imports packages at different rates, which could advantage some package authors over others, particularly if programmers and engineers come to rely on Codex’s suggestions. Over a longer time horizon, the effects of this class of technologies on software-related labor markets and on the economy more generally could be more substantial as capabilities improve. More study is needed both on the effects of code generation capabilities and on appropriate responses. We discuss economic and labor market implications in more detail in Appendix [H](#A8 "Appendix H Supplemental economic analysis ‣ Evaluating Large Language Models Trained on Code"). ### 7.5 Security implications Codex could have various effects on the security landscape. Because Codex can produce vulnerable or misaligned code,333See Appendix [G](#A7 "Appendix G Supplemental security analysis ‣ Evaluating Large Language Models Trained on Code") - Insecure Code for examples of Codex producing insecure code. qualified operators should review its generations before executing or trusting them, absent appropriate precautions. Future code generation models may be able to trained to produce more secure code than the average developer, though that is far from certain. Codex could also be misused to aid cybercrime. Although this is worthy of concern, based on our testing, we believe that at their current level of capability, Codex models do not materially lower the barrier to entry for malware development.444For more on characterizing Codex’s capability limitations, see the Limitations section and experiments in the security analysis in Appendix [G](#A7 "Appendix G Supplemental security analysis ‣ Evaluating Large Language Models Trained on Code"). We expect that more powerful code generation models will lead to future advancements, and therefore further research into mitigations and continued study of model capabilities are necessary. The non-deterministic nature of systems like Codex could enable more advanced malware. This non-determinism makes it easier to create diverse software that accomplish the same tasks. While software diversity can sometimes aid defenders,555For example, by helping to prevent certain types of memory corruption vulnerabilities. See (Davis, [2018](#bib.bib56 "Protecting applications with automated software diversity")) for more. it presents unique challenges for traditional malware detection and antivirus systems that rely on fingerprinting and signature-matching against previously sampled binaries. For example, a more capable code generation model could conceivably advance techniques for generating polymorphic malware.666Polymorphic malware is malicious code that mutates its implementation while maintaining its function. We believe that application security and model deployment strategies including rate-limiting access and abuse monitoring can manage this threat in the near term; however, the efficacy of these mitigations may scale sublinearly as more capable models are developed. Similar to large language models, Codex models can learn patterns present in their training data (Carlini et al., [2021](#bib.bib90 "Extracting training data from large language models")). Sensitive data present in source code are liable to be predicted by the model. Because Codex is trained on public repositories, we consider any sensitive data present in the training data to have already been compromised. Similarly, the public data should generally be treated as untrusted, as previous work (Goldblum et al., [2021](#bib.bib87 "Dataset security for machine learning: data poisoning, backdoor attacks, and defenses"); Schuster et al., [2020](#bib.bib74 "You autocomplete me: poisoning vulnerabilities in neural code completion")) has found that attackers may be able to corrupt training data to trigger specific model behaviors at runtime. We further discuss security implications in Appendix [G](#A7 "Appendix G Supplemental security analysis ‣ Evaluating Large Language Models Trained on Code"). ### 7.6 Environmental impacts Codex, like other large generative models, has an energy footprint from both training and inference (Schwartz et al., [2019](#bib.bib83 "Green ai"); Bender et al., [2021](#bib.bib60 "On the dangers of stochastic parrots: can language models be too big?"); Patterson et al., [2021](#bib.bib61 "Carbon emissions and large neural network training")). The original training of GPT-3-12B consumed hundreds of petaflop/s-days of compute, while fine-tuning it to create Codex-12B consumed a similar amount of compute. This training was performed on a platform (Azure) that purchases carbon credits and sources significant amounts of renewable energy, reducing its carbon footprint.777Microsoft made a commitment in 2020 to shift to 100 percent renewable energy supply in its buildings and data centers by 2025. <https://blogs.microsoft.com/blog/2020/01/16/microsoft-will-be-carbon-negative-by-2030/> A full assessment of the environmental impact of compute use is impossible to conduct without grounding in context and making comparison to the counterfactual impacts of competing products or services. Such analysis is out of scope for this paper. Compute consumption also has costs in the wider supply chain that can be quite concentrated on certain regions.888While data center energy usage has become much more efficient in recent years (Masanet et al., [2020](#bib.bib62 "Recalibrating global data center energy-use estimates")), the production, use, and disposal of semiconductors still imposes environmental and human costs. See, e.g., (Crawford, [2021](#bib.bib11 "Atlas of ai: power, politics, and the planetary costs of artificial intelligence")) Looking more globally and long-term, the compute demands of code generation could grow to be much larger than Codex’s training if significant inference is used to tackle challenging problems.999Given that code generation (and other forms of AI) might be deployed widely throughout the economy as discussed above, these considerations suggest additional urgency in adopting renewable energy. ### 7.7 Legal implications There are several legal considerations related to generated code. To begin with, the training of AI systems on Internet data, such as public GitHub repositories, has previously been identified as an instance of “fair use” (O’Keefe et al., [2019](#bib.bib91 "Comment regarding request for comments on intellectual property protection for artificial intelligence innovation")). Our preliminary research also finds that Codex models rarely generate code that is identical to the contents of training data. Such occurrences were < 0.1% in a study examining the frequency of code generations that appear to match code snippets in the training data (Ziegler, [2021](#bib.bib92 "A first look at rote learning in github copilot suggestions.")). In these rare instances, the generated code consisted of common expressions or conventions within the programming language that appeared over and over again in the training data. We find that, to the extent the generated code appears identical to the training data, it is due to the predictive weightings in the model rather than retention and copying of specific code. Generated code is also responsive and customized to the user’s input, and the user retains complete control over editing and acceptance of the generated code. This can make code generation similar to auto-suggest or auto-completion features that exist as features of other tools of authorship (e.g., document editors), in the sense that the finished work is still seen as the author’s. Our commitment to responsible and safe AI includes continued attention to the broader intellectual property implications of code generation systems. We intend to remain engaged with policymakers and experts on these issues so that the users of such systems can ultimately deploy them with confidence. ### 7.8 Risk mitigation In closing, given the above, models like Codex should be developed, used, and their capabilities explored carefully with an eye towards maximizing their positive social impacts and minimizing intentional or unintentional harms that their use might cause. A contextual approach is critical to effective hazard analysis and mitigation, though a few broad categories of mitigations are important to consider in any deployment of code generation models. Careful documentation and user interface design, code review requirements, and/or content controls (e.g., filtering of outputs) may help to reduce harms associated with over-reliance as well as offensive content or insecure code generation. In the context of a model made available as a service (e.g., via an API), policies such as user review, use case restrictions, monitoring, and/or rate limiting may also help to reduce harms associated with malicious use or prevent its use in high-stakes domains for which the models are not well suited. Appendices E, F, G, and H provide further detail on the risks described in this section and outline additional mitigation and research opportunities. 8 Related Work --------------- The deep learning resurgence has led to strong advances in the field of program learning. Two popular approaches to neural program learning are program induction and program synthesis. In program induction, a model generates program outputs directly from a latent program representation. Learning to Execute (Zaremba and Sutskever, [2014](#bib.bib16 "Learning to execute")) demonstrated that models could execute simple tasks like addition and memorization, though they encountered difficulties with more general tasks. Later attempts at program induction incorporated inductive biases based on modern computing devices, such as the Neural Turing Machine (Graves et al., [2014](#bib.bib17 "Neural turing machines")) and the Neural GPU (Kaiser and Sutskever, [2015](#bib.bib18 "Neural gpus learn algorithms")). In program synthesis, a model explicitly generates a program, usually from a natural language specification. One of the most popular classical approaches used a probabilistic context free grammar (PCFG) to generate a program’s abstract syntax tree (AST). Maddison and Tarlow ([2014](#bib.bib19 "Structured generative models of natural source code")) improved on this setup by learning a state vector used to condition child node expansion. Later, Allamanis et al. ([2015](#bib.bib30 "Bimodal modelling of source code and natural language")) applied this idea in text-to-code retrieval and Yin and Neubig ([2017](#bib.bib22 "A syntactic neural model for general-purpose code generation")) utilized it in text-conditional code generation. Code2seq (Alon et al., [2018](#bib.bib23 "Code2seq: generating sequences from structured representations of code")) found that ASTs could also be leveraged for code-to-text generation. Programs can also be synthesized without passing through an AST representation. Hindle et al. ([2012](#bib.bib24 "On the naturalness of software")) investigated n-gram language models of code, finding code to be more predictable than natural language. Latent Predictor Networks (Ling et al., [2016](#bib.bib25 "Latent predictor networks for code generation")) showed that character-level language models could generate working code for implementing Magic the Gathering cards in an online arena, when aided with a latent mode that allows card attributes to be copied into code. DeepCoder (Balog et al., [2017](#bib.bib26 "DeepCoder: learning to write programs")) trained a model to predict the functions appearing in source code, which could be used to guide program search. Following the success of large natural language models (Devlin et al., [2018](#bib.bib27 "Bert: pre-training of deep bidirectional transformers for language understanding"); Radford et al., [2019](#bib.bib31 "Language models are unsupervised multitask learners"); Liu et al., [2019](#bib.bib32 "RoBERTa: a robustly optimized bert pretraining approach"); Raffel et al., [2020](#bib.bib33 "Exploring the limits of transfer learning with a unified text-to-text transformer"); Brown et al., [2020](#bib.bib34 "Language models are few-shot learners")) large scale Transformers have also been applied towards program synthesis. CodeBERT (Feng et al., [2020](#bib.bib13 "CodeBERT: a pre-trained model for programming and natural languages")) trained the BERT objective on docstrings paired with functions, and obtained strong results on code search. PyMT5 (Clement et al., [2020](#bib.bib14 "PyMT5: multi-mode translation of natural language and python code with transformers")) is similar in spirit to our work, and used the T5 objective to train a system which can translate between non-overlapping subsets of {signature, docstring, body}. We used functional correctness to benchmark our models, and observed improvements on this metric with more sampling. SPoC (Kulal et al., [2019](#bib.bib29 "SPoC: search-based pseudocode to code")) considered the problem of producing functionally correct code from pseudocode with a fixed budget of compilations, which is similar to our pass@k metric. TransCoder (Lachaux et al., [2020](#bib.bib35 "Unsupervised translation of programming languages")) trained a system to translate between programming languages in an unsupervised manner, and also observed that functional correctness better captured the capabilities of their model than BLEU score. In fact, ContraCode (Jain et al., [2020](#bib.bib36 "Contrastive code representation learning")) leveraged the large space of functionally correct programs to train a contrastive code model, which improved model performance on tasks like type inference. Finally, RobustFill (Devlin et al., [2017](#bib.bib37 "RobustFill: neural program learning under noisy i/o")) observed that the best way to find a program consistent with input examples was to synthesize multiple samples through beam search. Two early domain-specific datasets used to benchmark neural programming systems were FlashFill (Gulwani, [2011](#bib.bib59 "Automating string processing in spreadsheets using input-output examples"); Gulwani et al., [2012](#bib.bib38 "Spreadsheet data manipulation using examples")) and Hearthstone (Ling et al., [2016](#bib.bib25 "Latent predictor networks for code generation")), though the community has trended towards broader and more difficult datasets. Barone and Sennrich ([2017](#bib.bib39 "A parallel corpus of python functions and documentation strings for automated code documentation and code generation")) proposed a large training and evaluation dataset consisting of Python declarations, docstrings, and bodies scraped from GitHub. The CodeSearchNet challenge (Husain et al., [2019](#bib.bib40 "CodeSearchNet challenge: evaluating the state of semantic code search")) built an even larger corpus from GitHub with data from multiple popular programming languages. Recently, CodeXGLUE (Lu et al., [2021](#bib.bib41 "CodeXGLUE: a machine learning benchmark dataset for code understanding and generation")) aggregated several programming benchmarks, making use of the recently proposed CodeBLEU metric (Ren et al., [2020](#bib.bib15 "Codebleu: a method for automatic evaluation of code synthesis")). Most relevant to our evaluation work is the APPS (Hendrycks et al., [2021](#bib.bib21 "Measuring coding challenge competence with apps")) benchmark for measuring functional correctness based on problems from the competitive programming website Codeforces. Finally, we note that coding is a broad activity which involves much more than synthesizing code from docstrings. Tufano et al. ([2020](#bib.bib42 "Unit test case generation with transformers and focal context")) use Transformers to generate unit tests for code which outperformed commercial offerings. Aye et al. ([2021](#bib.bib43 "Learning autocompletion from real-world datasets")) built an internal auto-complete tool for Facebook, and found that training on accepted user completions boosted system performance. Development also entails locating and fixing bugs. Early works used static or dynamic code analysis (Agrawal et al., [1995](#bib.bib123 "Fault localization using execution slices and dataflow tests"); Korel and Rilling, [1997](#bib.bib124 "Application of dynamic slicing in program debugging")), learned association rules (Jeffrey et al., [2009](#bib.bib126 "BugFix: a learning-based tool to assist developers in fixing bugs")), and genetic programming (Goues et al., [2012](#bib.bib125 "A systematic study of automated program repair: fixing 55 out of 105 bugs for $8 each")) to debug faulty code. These approaches relied on running against a test suite to not only evaluate the correctness of suggestions but also expose problems in execution trace or search for a solution. More recent works (Tufano et al., [2019](#bib.bib127 "An empirical study on learning bug-fixing patches in the wild via neural machine translation"); Drain et al., [2021](#bib.bib128 "Generating bug-fixes using pretrained transformers")) considered bug-fixing as neural machine translation from buggy to correct programs. However, these works used an exact match against a reference instead of functional correctness, citing Qi et al. ([2015](#bib.bib129 "An analysis of patch plausibility and correctness for generate-and-validate patch generation systems"))’s finding that most of the proposed solutions by genetic search in (Goues et al., [2012](#bib.bib125 "A systematic study of automated program repair: fixing 55 out of 105 bugs for $8 each")) passed through weak test suites by deleting functionality that failed. Human developers often write test suites with limited but targeted coverage, but this does not always work well against an algorithm, highlighting the challenges of evaluating correctness of programs. 9 Conclusion ------------- We investigated whether it was possible to train large language models to produce functionally correct code bodies from natural language docstrings. By fine-tuning GPT on code from GitHub, we found that our models displayed strong performance on a dataset of human-written problems with difficulty level comparable to easy interview problems. Model performance could be improved by training on a distribution more similar to the evaluation set, and also by producing multiple samples from a model. We also found that it was simple to train a model to complete the reverse task of producing docstrings from code bodies, and that the performance profiles of these models were similar. Finally, we expanded on the broader impacts of code generating models, and discussed model limitations, finding significant room for improvement. Acknowledgements ---------------- We would like to sincerely thank Sandhini Agarwal, Casey Chu, Jeffrey Ding, Peter Eckersley, Gillian Hadfield, Rich Harang, Jacob Jackson, Yunxin Jiao, Jade Leung, Andrew Lohn, Ryan Lowe, Thomas McGuire, Margaret Mitchell, Florentine Eloundou Nekoul, Cullen O’Keefe, Long Ouyang, William Saunders, Pranav Shyam, Irene Solaiman, Aravind Srinivas, Helen Toner, Ashish Vaswani, and Jeffrey Wu for helpful discussions and feedback on drafts of this work. We are also grateful to the Acceleration and Supercomputing teams at OpenAI for their work on software and hardware infrastructure that this project used.
faa5515e-eb62-4f93-aa7e-a55c5eb93de3
trentmkelly/LessWrong-43k
LessWrong
A Visualization of Nick Bostrom’s Superintelligence Through a series of diagrams, this article will walk through key concepts in Nick Bostrom’s Superintelligence. The book is full of heavy content, and though well written, its scope and depth can make it difficult to grasp the concepts and mentally hold them together. The motivation behind making these diagrams is not to repeat an explanation of the content, but rather to present the content in such a way that the connections become clear. Thus, this article is best read and used as a supplement to Superintelligence.   Note: Superintelligence is now available in the UK. The hardcover is coming out in the US on September 3. The Kindle version is already available in the US as well as the UK. Roadmap: there are two diagrams, both presented with an accompanying description. The two diagrams are combined into one mega-diagram at the end.       Figure 1: Pathways to Superintelligence     Figure 1 displays the five pathways toward superintelligence that Bostrom describes in chapter 2 and returns to in chapter 14 of the text. According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence. Biological cognition, i.e., the enhancement of human intelligence, may yield a weak form of superintelligence on its own. Additionally, improvements to biological cognition could feed back into driving the progress of artificial intelligence or whole brain emulation. The arrows from networks and organizations likewise indicate technologies feeding back into AI and whole brain emulation development.   Artificial intelligence and whole brain emulation are two pathways that can lead to fully realized superintelligence. Note that neuromorphic is listed under artificial intelligence, but an arrow connects from whole brain emulation to neuromorphic. In chapter 14, Bostrom suggests that neuromorphic is a potential outcome of incomplete or improper whole brain emulation. Synthetic AI includes all the approaches to AI that are not neuromorphic; other terms
f97903fd-b2c3-4a40-9fa1-ab00c83af2a0
trentmkelly/LessWrong-43k
LessWrong
Simulation of technological progress (work in progress) I've made a model/simulation of technological progress, that you can download and run on your laptop. My goal is to learn something about intelligence explosions, takeoff speeds, discontinuities, human-level milestones, AGI vs. tools, bottlenecks, or something else. I'll be happy if I can learn something about even one of these things, even if it's just a minor update and not anything close to conclusive. So far I've just got a very basic version of the model built. It works, but it's currently unclear what--if anything--we can learn from it. I need to think more about whether the assumptions it uses are realistic, and I need to explore the space of parameter settings more systematically. I'm posting it here to get feedback on the basic idea, and maybe also on the model so far if people want to download it and play around. I'm particularly interested in evidence/arguments about whether or not this is a productive use of my time, and arguments that some hidden assumption my model makes is problematically determining the results. If you want to try out the model yourself, download NetLogo here and then open the file in this folder. How the model works: The main part of the model consists of research projects, which are lists of various types of task. Civilization completes tasks to complete research projects, and when projects get finished, civilization gets a "bonus" which allows it to do new types of task, and to do some old types faster. The projects, the lists of tasks needed to complete them, the speeds at which civilization can do the tasks, and the bonuses granted by completing projects are all randomly generated, typically using exponential distributions and often with parameters you can change in the UI. Other important parameters can be changed in the UI also, such as how many task types are "off limits" for technological improvement, and how many task types are "temporarily off limits" until some specified level of technology is reached. As explaine
265c0e30-ee7a-41da-843d-fe4aa22e2454
trentmkelly/LessWrong-43k
LessWrong
Why is medical advice all caution and no info? Wordpress now offers me free images of whatever I'm talking about. This is presumably an illustration of how the world doesn't necessarily become a better place if you tell everyone they urgently need to seek medical help for something you won't tell them anything more about. Image by uncultured via Flickr I had a couple of bad looking medical test results in a row, so I was sent to a specialist, with advice along the lines of ‘well, we can’t say it’s not cancer… probably get checked out as soon as possible’. When I eventually got to the specialist he immediately told me a bunch of relevant conditional probabilities: of any problem at all given such test results, of it being a bad problem given it’s some kind of problem, the probability per year of cancer given each kind of problem. These were not scary numbers at all. Given that it can take months to extract your data from one doctor and get an appointment with a specialist, it would have been very nice to have been told these numbers by the original doctor, instead of just knowing for a few months that such results are some unknown degree of evidence in favor of cancer. Is this just an oversight by a particular useless doctor? It seems not. I’ve noticed another two examples of the same problem in medicine recently. If you look up symptoms online, you will often be told to seek emergency medical assistance immediately. It often doesn’t even tell you what the potential problem is, and certainly not what the odds are of it occurring, so it’s pretty hard to evaluate the suggestion. If you actually go to a doctor about one of these symptoms, the doctor often tells you not to worry about it without more than knowledge of your age.  Often the website also knows your age, or at least asked it, and it would be simple for them to mention that said symptom is only a concern if you are over fifty, or even just the basic information about how common such a thing is given that symptom. Similarly my region has a free health ph
c048a8b6-1e7e-46be-b71a-0d946661178b
trentmkelly/LessWrong-43k
LessWrong
LW Dev question: FB-style tagging? As LW is an intellectual **community** where people get to know each other and know "who knows what, who's working on what" it seems like something like tagging people FB-style in specific posts/comments is a very low-hanging fruit. (Of course you should be able to silence people from tagging you or sort taggers by some score). Is this in the pipeline, or has been ruled out for some reason?
35da765f-0e2a-4d5f-b169-5b78e97a025b
trentmkelly/LessWrong-43k
LessWrong
Secular Therapy Project > The purpose of the Secular Therapy Project is to help connect non-religious or secular persons who need mental health services with outstanding mental health professionals, such as psychologists, psychiatrists, counselors, and others. Using our system is simple and requires very little information from you. Our goal is to protect your confidentiality until you find a therapist to correspond with or to work with. > > What’s unique about the STP is that we aren’t just a database of therapists. Instead, we very carefully screen potential therapists who want to become part of the STP. We screen them to make sure that a) they are appropriately licensed in their state or country, b) that they are secular in nature as well as practice, and c) that they actually use evidence-based treatments, which have been shown to be effective at helping improve mental health problems in controlled clinical trials. This means not only will our therapists not try to preach to you or convert you, but that they are also using the most well-supported types of treatment to help you.
8f832762-21de-43cd-b8bd-04b45ee0f8d8
trentmkelly/LessWrong-43k
LessWrong
What social trigger-action plans did you implement? Trigger-Action Plan, as opposed to Trigger-Action Pattern means the mental algorithm was voluntarily implemented. Social TAP means the trigger originates from a social interaction. For example "starting a discussion" is a social trigger, whereas "opening a door" is a physical one and "confusion" is a cognitive one. For more info on TAPs, see: What are Trigger-Action Plans (TAPs)? Related questions: * What cognitive trigger-action-plans did you implement? * What physical trigger-action plans did you implement? Main motivation for asking: To document my TAPs
c9b8aced-890f-4bdf-8624-b2762a3e6458
trentmkelly/LessWrong-43k
LessWrong
What topics are appropriate for LessWrong? For example, what would be inappropriately off topic to post to LessWrong discussion about? I couldn't find an answer in the FAQ. (Perhaps it'd be worth adding one.) The closest I could find was this: > What is Less Wrong? > > Less Wrong is an online community for discussion of rationality. Topics of interest include decision theory, philosophy, self-improvement, cognitive science, psychology, artificial intelligence, game theory, metamathematics, logic, evolutionary psychology, economics, and the far future. However "rationality" can be interpreted broadly enough that rational discussion of anything would count, and my experience reading LW is compatible with this interpretation being applied by posters. Indeed my experience seems to suggest that practically everything is on topic; political discussion of certain sorts is frowned upon, but not due to being off topic. People often post about things far removed from the topics of interest. And some of these topics are very broad: it seems that a lot of material about self-improvement is acceptable, for instance.
88172cb6-d6a6-4428-ba33-8f8aa6520160
trentmkelly/LessWrong-43k
LessWrong
We've failed: paid publication , pirates win.
93912c43-b621-4285-b4d5-4b9581659aee
trentmkelly/LessWrong-43k
LessWrong
Meetup : Baltimore Area / UMBC Weekly Meetup Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup WHEN: 06 November 2016 09:00:00PM (-0400) WHERE: Performing Arts and Humanities Bldg Room 456, 1000 Hilltop Cir, Baltimore, MD 21250 Meeting is on 4th floor of the Performing Arts and Humanities Building. Permit parking designations do not apply on weekends, so park pretty much wherever you want. Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup
7c2e8125-3047-46eb-bb93-5e4e399186f1
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington DC fun and games meetup Discussion article for the meetup : Washington DC fun and games meetup WHEN: 05 May 2013 03:00:00PM (-0400) WHERE: National Portrait Gallery, Washington, DC 20001, USA (courtyard) We'll be meeting to hang out and play games. Discussion article for the meetup : Washington DC fun and games meetup
5adb017d-7d74-4a80-8dc2-d5048a20612a
trentmkelly/LessWrong-43k
LessWrong
What does your web of beliefs look like, as of today? Every few months, I post a summary of my beliefs to my blog. This has several advantages: 1. It helps to clarify where I'm "coming from" in general. 2. It clears up reader confusion arising from the fact that my beliefs change. 3. It's really fun to look back on past posts and assess how my beliefs have changed, and why. 4. It makes my positions easier to criticize, because they are clearly stated and organized into one place. 5. It's an opportunity for people to very quickly "get to know me." To those who are willing: I invite you to post your own web of beliefs. I offer my own, below, as an example (previously posted here). Because my world is philosophy, I frame my web of beliefs in those terms, but others need not do the same:     My Web of Beliefs (Feb. 2011) Philosophy Philosophy is not a matter of opinion. As in science, some positions are much better supported by reasons than others are. I do philosophy as a form of inquiry, continuous with science. But I don’t have patience for the pace of mainstream philosophy. Philosophical questions need answers, and quickly. Scientists know how to move on when a problem is solved, but philosophers generally don’t. Scientists don’t still debate the fact of evolution or the germ theory of disease just because alternatives are (1) logically possible, (2) appeal to many people’s intuitions, (3) are “supported” by convoluted metaphysical arguments, or (4) fit our use of language better. But philosophers still argue about Cartesian dualism and theism and contra-causal free will as if these weren’t settled questions. How many times must the universe beat us over the head with evidence before we will listen? Relinquish your dogmas; be as light as a feather in the winds of evidence. Epistemology My epistemology is one part cognitive science, one part probability theory. We encounter reality and form beliefs about it by way of our brains. So the study of how our brains do that is central to epistemology. (Quin
e77e794a-0aab-4485-8e36-6246558f5b66
trentmkelly/LessWrong-43k
LessWrong
Brain-like AGI project "aintelope" Steven Byrnes called for a brain-like AGI research agenda, and three guys from the Hamburg EA community listened. We are excited about Steven's five-star program 15.2.1.2 "Reverse-engineer human social instincts," and kicked off work a few weeks ago in June. We familiarized ourselves with Steven's brain-like AGI framework, and meet weekly now.  This post is an announcement and a request for feedback and collaboration. Why us? We have a great skill fit:  * A professional data scientist with the required machine learning experience to implement RL agents, * A professional Python developer with game-programming background to implement the world mode, and visualization. * A seasoned software engineer with startup CTO experience who takes care of everything else, e.g., this blog post (me). What have we done so far? We have already implemented a toy world and simple RL agent in a first iteration of Steven's framework. We build on top of the Python framework PettingZoo. Our code is in a private Github repo that we believe should stay private given the potential impact. Looking for thoughts on this.  We have collected a list of more than 60 candidate instincts from neuroscience and other sources that we can implement and experiments with. The project website will be here: https://www.aintelope.net/ (effectively empty right now). The project and our progress so far were presented at the Human-aligned AI Summer School in Prague on August 5th, where we got feedback about the project, brain-like AGI in general, and found the three participants who wanted to collaborate. What do we want to do? Implementing models and running tests is a proven way to test theories and check our understanding of the models. Better quick success/failure on something where it is too easy to build a big theory.  Specifically, we want to: * Show that a relatively simple set of instincts can shape complex behaviors of a single agent. * Show whether the instincts lead to significantly
5d3379d4-9a59-441a-9d0b-c49303244e36
trentmkelly/LessWrong-43k
LessWrong
Map of maps of interesting fields I love seeing cool visualizations (maps) of interesting intellectual fields. I’m also a big fan of making lists! Accordingly, I compiled a bunch of maps of fields that I thought were interesting. Please comment if you know of more such maps, and I’ll include them.  Scott Alexander’s map of Effective Altruism (2020) mariekedev’s mindmap of EA organisations (2023) Hamish Doodles’ aisafety.world (2023) On a side note, I would be very keen for someone to create a similar map of relevant organizations in the biosecurity & pandemic preparedness space. I plan to post a minimum viable product (bullet point list) soon.  James Lin’s map of biosecurity interventions (2022) Scott Alexander’s map of the rationality community (2014) Dan Elton’s map of progress studies (2021)   Ada Nguyen’s map of the Longevity Biotech Landscape (2023) Nadia Asparouhova’s map of climate tribes (2022) Samuel Arbesman’s Catalog of New Types of Research Organizations (2023) This one is more of a long list, but I thought it was very interesting nonetheless! Honorable mentions These are other mapping efforts that met my "I'm curious about it" but not my "This is super interesting" bar. Some of them also seem outdated.  * xkcd’s map of online communities (2010) * Julia Galef’s map of the Bay Area memespace (2013) * Joe Lightfoot’s The Liminal Web: Mapping An Emergent Subculture Of Sensemakers, Meta-Theorists & Systems Poets (2021 & updated in 2023) Shoutout to Rival Voices, who created a similar-ish collection of maps (2023), and Nadia Asparouhova, who wrote a more meta-level post about Mapping digital worlds (2023).
f2e29f3a-564f-4708-a9cc-baeb4b31bc5b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Iteration Fixed Point Exercises .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} This is the third of three sets of fixed point exercises. The first post in this sequence is [here](https://www.lesswrong.com/posts/mojJ6Hpri8rfzY78b/fixed-point-theorems), giving context. Note: Questions 1-5 form a coherent sequence and questions 6-10 form a separate coherent sequence. You can jump between the sequences. 1. Let (X,d) be a complete metric space. A function f:X→X is called a contraction if there exists a q<1 such that for all x,y∈X, d(f(x),f(y))≤q⋅d(x,y). Show that if f is a contraction, then for any x, the sequence {xn=fn(x0)} converges. Show further that it converges exponentially quickly (i.e. the distance between the nth term and the limit point is bounded above by c⋅an for some a<1) 2. (Banach contraction mapping theorem) Show that if (X,d) is a complete metric space and f is a contraction, then f has a unique fixed point. 3. If we only require that d(f(x),f(y))<d(x,y) for all x≠y, then we say f is a weak contraction. Find a complete metric space (X,d) and a weak contraction f:X→X with no fixed points. 4. A function f:Rn→R is convex if f(tx+(1−t)y)≤tf(x)+(1−t)f(y), for all t∈[0,1] and x,y∈Rn. A function f is strongly convex if you can subtract a positive parabaloid from it and it is still convex. (i.e. f is strongly convex if x↦f(x)−ε||x||2 is convex for some ε>0.) Let f be a strongly convex smooth function from Rn to R, and suppose that the magnitude of the second derivative ∥∇2f∥ is bounded. Show that there exists an ε>0 such that the function g:Rn→Rn given by x↦x−ε(∇f)(x) is a contraction. Conclude that gradient descent with a sufficiently small constant step size converges exponentially quickly on a strongly convex smooth function. 5. A finite stationary Markov chain is a finite set S of states, along with probabilistic rule A:S→ΔS for transitioning between the states, where ΔS represents the space of probability distributions on S. Note that the transition rule has no memory, and depends only on the previous state. If for any pair of states s,t∈ΔS, the probability of passing from s to t in one step is positive, then the Markov chain (S,A) is ergodic. Given an ergodic finite stationary Markov chain, use the Banach contraction mapping theorem to show that there is a unique distribution over states which is fixed under application of transition rule. Show that, starting from any state s, the limit distribution limn→∞An(s) exists and is equal to the stationary distribution. 6. A function f from a partially ordered set to another partially ordered set is called monotonic if x≤y implies that f(x)≤f(y). Given a partially ordered set (P,≤) with finitely many elements, and a monotonic function from P to itself, show that if f(x)≥x or f(x)≤x, then fn(x) is a fixed point of f for all n>|P|. 7. A complete lattice (L,≤) is a partially ordered set in which each subset of elements has a least upper bound and greatest lower bound. Under the same hypotheses as the previous exercise, extend the notion of fn(x) for natural numbers n to fα(x) for ordinals α, and show that fα(x) is a fixed point of f for all x∈X with f(x)≤x or f(x)≥x and all |α|>|L| (|A|≤|B| means there is an injection from A to B, and |A|>|B| means there is no such injection). 8. (Knaster-Tarski fixed point theorem) Show that the set of fixed points of a monotonic function on a complete lattice themselves form a complete lattice. (Note that since the empty set is always a subset, a complete lattice must be nonempty.) 9. Show that for any set A, (P(A),⊆) forms a complete lattice, and that any injective function from A to B defines a monotonic function from (P(A),⊆) to (P(B),⊆). Given injections f:A→B and g:B→A, construct a subset A′ of A and a subset of B′ of B such that B′=f(A′) and A−A′=g(B−B′). 10. (Cantor–Schröder–Bernstein theorem) Given sets A and B, show that if |A|≤|B| and |A|≥|B|, then |A|=|B|. (|A|≤|B| means there is an injection from A to B, and |A|=|B| means there is a bijection) --- *Please use the spoilers feature - the symbol '>' followed by '!' followed by space -in your comments to hide all solutions, partial solutions, and other discussions of the math. The comments will be moderated strictly to hide spoilers!* *I recommend putting all the object level points in spoilers and including metadata outside of the spoilers, like so: "I think I've solved problem #5, here's my solution <spoilers>" or "I'd like help with problem #3, here's what I understand <spoilers>" so that people can choose what to read.* --- *Tomorrow's AI Alignment Forum Sequences post will be "Approval-directed agents: overview" by Paul Christiano in the sequence Iterated Amplification.* *The next post in this sequence will be released on Saturday 24th November, and will be 'Fixed Point Discussion'.*
b565a53d-5230-4b9c-9bd0-5d88ecf4f7fd
trentmkelly/LessWrong-43k
LessWrong
Inequality is inseparable from markets Cross posted to The Good blog I think prices are often really good at allocating resources efficiently. I also think that utility is roughly the log of income and we have a duty to make sure that no one lives in poverty. It would be great if there was no tradeoff between these goals - efficient allocation of resources, inequality and poverty. But I think there is and it comes down to how the price mechanism works. I spend the first section of this post explaining the price mechanism, so feel free to skip it if you’re already familiar. The price mechanism  The price mechanism has some really attractive qualities. When prices go up there’s an incentive to try to produce more of that good. Prices rise, in a competitive market, either because more people want a good and because that good has got more valuable - i.e demand has increased. Alternatively, prices could go up because it’s become more expensive to produce that good. In an uncompetitive market agents have a lot more leeway on how they set prices. But, eventually they’ll want to set them above their marginal cost. They can only do that because they have market power. If they didn’t, then other agents would come in and sell the goods at a lower price. As long as the price is above the cost of producing one additional unit of the good, agents are willing to undercut their competitors and sell the good at a lower price1.  In all three of these cases the price mechanism is doing something really useful. When demand goes up, the price mechanism does the really hard job collecting information from everyone in the economy - weighted by how much money they spend - and turning that into a single number that says, according to this weighted average, how much people value a good or service compared to everything else that can be bought and sold. When prices go up in an uncompetitive market it sends a different, but also extremely socially valuable signal. It says that there’s money to be made in this market, so more ag
3ada213d-76ac-47a2-a43c-4095ba33958e
trentmkelly/LessWrong-43k
LessWrong
Max Tegmark's new Time article on how we're in a Don't Look Up scenario [Linkpost] https://time.com/6273743/thinking-that-could-doom-us-with-ai/ Max Tegmark has posted a Time article on AI Safety and how we're in a "Don't Look Up" scenario.  In a similar manner to Yudkowsky, Max went on Lex Fridman and has now posted a Time article on AI Safety. (I propose we get some more people into this pipeline) Max, however, portrays a more palatable view regarding societal standards. With his reference to Don't Look Up, I think this makes it one of my favourite pieces to send to people new to AI Risk, as I think it describes everything that your average joe needs to know quite well. (An asteroid with a 10% risk of killing humanity is bad) In terms of general memetics, it will be a lot harder for someone like LeCun to come up with a genius equivalence between asteroid safety and airplane safety with this framing. (Which might be a shame since it's one of the dumber counter-arguments I've heard.)  But who knows? He might just claim that scientists know and have always known how to use nuclear bombs to shoot the asteroid away or something. What I wanted to say with the above is that I think Max is doing a great way of framing the problem, and with his respect from his earlier career as a physicist, I think it would be good to use his articles more in public discussions. I also did quite enjoy how he described alignment on the Lex Fridman podcast, and even though I don’t agree with all he says, it’s good enough.  
8cf3b5d4-38a8-43af-baee-31540c372c27
trentmkelly/LessWrong-43k
LessWrong
My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI I have a mix of views on AI x-risk in general — and on OpenAI specifically — that no one seems to be able to remember, due to my views not being easily summarized as those of a particular tribe or social group or cluster.  For some of the views I consider most neglected and urgently important at this very moment, I've decided to write them here, all-in-one-place to avoid presumptions that being "for X" means I'm necessarily "against Y" for various X and Y. Probably these views will be confusing to read, especially if you're implicitly trying to pin down "which side" of some kind of debate or tribal affiliation I land on.  As far as I can tell, I don't tend to choose my beliefs in a way that's strongly correlated with or caused by the people I affiliate with.  As a result, I apologize in advance if I'm not easily remembered as "for" or "against" any particular protest or movement or trend, even though I in fact have pretty distinct views on most topics in this space... the views just aren't correlated according to the usual social-correlation-matrix. Anyhoo: 1. Regarding "pausing": I think pausing superintelligence development using collective bargaining agreements between individuals and/or states and/or companies is a good idea, along the lines of FLI's recent open letter, "Pause Giant AI Experiments", which I signed early and advocated for.   2. Regarding OpenAI, I feel overall positively about them: 1. I think OpenAI has been a net-positive influence for reducing x-risk from AI, mainly by releasing products in a sufficiently helpful-yet-fallible form that society is now able to engage in less-abstract more-concrete public discourse to come to grips with AI and (soon) AI-risk. 2. I've found OpenAI's behaviors and effects as an institution to be well-aligned with my interpretations of what they've said publicly.  That said, I'm also sympathetic to people other than me who expected more access to models or less access to models than what OpenAI has
19933719-a624-42ed-a604-e46524ccd522
StampyAI/alignment-research-dataset/arxiv
Arxiv
Categorizing Wireheading in Partially Embedded Agents 1 Introduction --------------- The term wireheading originates from experiments where an electrode is inserted into a rodent’s brain to directly stimulate “reward” [[Olds and Milner1954](#bib.bibx16)]. Compulsive self-stimulation from electrode implants has also been observed in humans [[Portenoy et al.1986](#bib.bibx17)]. Hedonic drugs can be seen as directly increasing the pleasure, or reward, that humans experience. Wireheading, in the context of artificially intelligent systems, is the behavior of corrupting the internal structure of the agent in order to achieve maximal reward without solving the designer’s goal. For example, imagine a cleaning agent that receives more reward when it observes that there is less dirt in the environment. If this reward is stored somewhere in the agent’s memory, and if the agent is sophisticated enough to introspect and modify itself during execution, it might be able to locate and edit that memory address to contain whatever value corresponds to the highest reward. Chances that such behavior will be incentivized increase as we develop ever more intelligent agents222An extensive list of examples in which various machine learning systems find ways to game the specified objective can be found at <https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/>. The discussion of AI systems has thus far been dominated by dualistic models where the agent is clearly separated from its environment, has well-defined input/output channels, and does not have any control over the design of its internal parts. Recent work on these problems [[Demski and Garrabrant2019](#bib.bibx6), [Everitt and Hutter2018](#bib.bibx8), [Everitt et al.2019](#bib.bibx9)] provides a taxonomy of ways in which embedded agents violate essential assumptions that are usually granted in dualistic formulations, such as with the universal agent AIXI [[Hutter2004](#bib.bibx12)]. Wireheading can be considered one particular class of misalignment [[Everitt and Hutter2018](#bib.bibx8)], a divergence between the goals of the agent and the goals of its designers. We conjecture that the only other possible type of misalignment is specification gaming, in which the agent finds and exploits subtle flaws in the design of the reward function. In the classic example of misspecification, an AI meant to play a boat race learns to repetitively obtain a stray reward in the game by circling a spot without actually reaching for the finishing line [[Amodei and Clark2016](#bib.bibx1)]. We believe that the first step towards solving the misalignment problem is to come up with concrete and formal definitions of the sub-problems. For this reason, this paper introduces wirehead-vulnerable agents and strongly wirehead-vulnerable agents, two mathematical definitions that can be found in Section [5](#S5 "5 Formalizing Wireheading ‣ Categorizing Wireheading in Partially Embedded Agents"). Following Everitt and Hutter’s approach of modeling agent-environment interactions with causal influence diagrams [[Everitt and Hutter2018](#bib.bibx8)], these definitions are based on a taxonomy of wireheading scenarios we introduce in Section [3](#S3 "3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents"). General Reinforcement Learning (GRL) frameworks such as the universal agent AIXI, and its computable approximations such as MCTS AIXI [[Veness et al.2011](#bib.bibx20)], are powerful tools for reasoning about the yet hypothetical Artificial General Intelligence, despite being dualistic. This motivates us to use and extend the GRL simulation platform AIXIjs [[Aslanides et al.2017](#bib.bibx3)] to experimentally demonstrate partial embedding and wireheading scenarios by varying the initial design of the agent in a N×N gridworld (see Section [4](#S4 "4 Experiments ‣ Categorizing Wireheading in Partially Embedded Agents")). 2 General Reinforcement Learning and AIXI ------------------------------------------ AIXI [[Hutter2004](#bib.bibx12)] is a theoretical model of artificial general intelligence, under the framework of reinforcement learning, that describes optimal agent behavior given unlimited computing power and minimal assumptions about the environment. In reinforcement learning, the agent-environment interaction consists of a turn-based game with discrete time-steps [[Sutton et al.1998](#bib.bibx19)]. At time-step t, the agent sends an action at to the environment, which in turn sends the agent a percept that consists of an observation and reward tuple, et=(ot,rt). This procedure continues indefinitely or eventually terminates, depending on the episodic or non-episodic nature of the task. Actions are selected from an action space A that is usually finite, and the percepts from a percept space E=O×R, where O is the observation space, and R is the reward space, which is usually [0,1]. For any sequence x1,x2,... , the part between t and k is denoted xt:k=xt...xk. The shorthand x<t=x1:t−1 denotes sequences starting from time-step 1 and ending at t−1, while x1:∞=x1x2... denotes an infinite sequence. Sequences can be appended to each other, and thus x<txt:k=x1:k. Finally, x∗ is any infinite string beginning with x. The environment is modeled by a deterministic program q of length l(q), and the future percepts e<m=U(q,a<m) up to a horizon m are computed by a universal (monotone Turing) machine U executing q given a<m. The probability of percept et given history ae<tat is thus given by: | | | | | | --- | --- | --- | --- | | | P(et∣ae<tat)=∑q:U(q,a≤t)=e≤t∗2−l(q) | | (1) | where Solomonoff’s universal prior [[Sunehag and Hutter2013](#bib.bibx18)] is used to assign a prior belief to each program. An agent can be identified with its policy, which is a distribution over actions π(at∣ae<t). If the agent is rational in the Von Neumann-Morgenstern sense [[Morgenstern and Von Neumann1953](#bib.bibx15)], it should maximize the expected return, as computed by the value function: | | | | | | --- | --- | --- | --- | | | Vπ(ae<t)=∑at∈Aπ(at|ae<t)⋅∑et∈EP(et|ae<tat)[γtrt+γt+1Vπ(ae1:t)] | | (2) | where γ:N→[0,1] is a discount function with convergent sum. In other words, the AIXI agent uses the policy: | | | | | | --- | --- | --- | --- | | | πAIXI(ae<t)=argmaxπ∈ΠVπ(ae<t) | | (3) | 3 Wireheading Strategies in Partially Embedded Agents ------------------------------------------------------ Aligning the goals of a reinforcement learning agent with the goals of its human designers is problematic in general. As investigated in recent work, there are several ways to model the misalignment problem [[Everitt and Hutter2018](#bib.bibx8)]. One model uses a reward function that is programmed before the agent is launched into its environment and not updated after that. A possibly more robust model integrates a human in the loop by letting them continuously modify the reward function. An example of this is Cooperative Inverse Reinforcement Learning [[Hadfield-Menell et al.2016](#bib.bibx10)]. We posit that, in the first case, the problem can be broken down into correctly specifying the reward function (the misspecification problem) and building agent subparts that inter-operate without causing the agent to take shortcuts in optimizing for the reward function in an unintended fashion (the wireheading problem). For example, as we show, an embedded agent has options to corrupt the observations on which the reward function evaluates performance, to modify the reward function, or to hijack the reward signal. Therefore, even if the reward function is perfectly specified, or even if there is a reliable mechanism that gradually improves it, such an agent may still be able to make unintended changes to itself and the environment. We are mainly interested in cases where wireheading happens “intentionally” or “by design,” such that exploiting the design is the rational choice for the agent. Covering the spectrum of misspecification scenarios is beyond the scope of this paper, as our main focus here is wireheading – the kind of misalignment contingent upon the embedded nature of the agent. The formulation of AIXI as presented in Section [2](#S2 "2 General Reinforcement Learning and AIXI ‣ Categorizing Wireheading in Partially Embedded Agents") is dualistic, with a clear boundary between the world and the agent. This is a strong assumption which simply isn’t valid in the real world, where agents are contained or embedded in the environment. Embedded agency is a nascent field of research and has proven bewildering [[Demski and Garrabrant2019](#bib.bibx6)]; we posit it can be less confusing and yet insightful to reason about partially embedded agents. We chose causal influence diagrams as the underlying abstraction in this area given their recent success in identifying potential failure modes in misalignment [[Everitt and Hutter2018](#bib.bibx8), [Everitt et al.2019](#bib.bibx9)]. In a nutshell, this approach consists of representing parts of the environment, the agent, and its subcomponents as nodes in a graph, where the edges represent causal relationships between the nodes. One limitation of this approach is the assumption of objective action and observation channels. Addressing subtle errors arising from the agent using different subjective definitions is beyond the scope of this paper. s0 r0 o0 a1 at st rt ot at+1 3 1 Figure 1: Causal graph of a partially embedded AIXI, with embedding of the percept et=(ot,rt). The agent’s actions are intended to influence the state st (green arrow), but may also influence the reward signal rt and the observation ot in unintended ways (red arrows). However, because there is no causal link from observations to rewards, the agent doesn’t care about influencing observations (dashed red arrow labeled with a 3), and only cares about influencing the reward signal in this case (solid red arrow labeled with a 1). In Figure [1](#S3.F1 "Figure 1 ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents"), we show the causal graph of the turn-based game we described in Section [2](#S2 "2 General Reinforcement Learning and AIXI ‣ Categorizing Wireheading in Partially Embedded Agents"), augmented by partially embedding the agent with its percepts in the environment. The agent’s action at is intended (green arrow) to modify the state of the environment st. However, because st determines the percept et=(ot,rt) the agent receives, it is known [[Everitt and Hutter2016](#bib.bibx7)] that implementing an intelligent enough approximation of AIXI would result in the agent modifying the reward signal itself, which is unintended (arrow labeled with a 1 in Figure [1](#S3.F1 "Figure 1 ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")). State transitions st, percepts et, and actions at are sampled according to the structural equations: | | | | | | --- | --- | --- | --- | | | st=fs(st−1,at)∼μ(st|st−1,at)et=(ot,rt)=fe(st)∼μ(et|st)at=fa(πt,ae<t)∼π(at|ae<t) | | (4) | ### 3.1 Embedded Reward and Observation Functions H s0 R0 O0 r0 o0 a1 at st Rt Ot rt ot at+1 4 3 2 1 Figure 2: Causal graph of a partially embedded AIXI with observation and reward mappings predefined by a human. Before starting the agent, the human H tries to implement her utility function u in a preprogrammed reward function R0, and specifies an observation function O0. The agent’s actions are intended to influence the state st (green arrow), but may also influence other nodes in unintended ways (red arrows). Dashed arrows indicate interventions that the agent has no incentive to perform. H s0 R0 O0 r0 o0 a1 at st Rt Ot rt ot at+1 4 3 2 1 Figure 3: Causal graph of a partially embedded AIXI whose rewards are a function of the agent’s observations, rather than of the true state of the environment, as it is reasonable to expect. This gives the agent an incentive to manipulate the observation mapping Ot and the observation signal ot (solid red arrows labeled with 4 and 3, respectively). The causal influence diagram of Figure [1](#S3.F1 "Figure 1 ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents") assumes that percepts are simply generated by the environment. While this is true to some extent, the real picture is more complex. In reality, the agent is initially constructed by a human H, who tries to implement her utility function u in a preprogrammed reward function R0:S→R. Additionally, the human would have to specify an observation function O0:S→ΔO. In Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents"), we show these additions. The new structural equations are: | | | | | | --- | --- | --- | --- | | | | | (5) | Each state st represents all aspects of the world not captured by any of the other nodes. There remains a difficult modeling choice about where to draw the boundary between the state and the observation. We loosely interpret observations as the part of the world that directly affects the agent’s sensors. In this case, two more unintended agent behaviors can occur. The agent can modify the mapping Rt in such a way that all states of the environment map to Rmax=maxR (arrow labeled with a 2 in Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")). Alternatively, the agent can modify the mapping Ot (arrow labeled with a 4 in Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")), but the agent has no incentive to do so as there is no causal link between Ot and rt. ### 3.2 Embedded Mapping of Observations to Rewards A common assumption in the POMDP literature is that the reward rt is a function of the state st. However, in the real world, the reward always depends on some observation of the state (that can be corrupted). That is, the reward rt is a function of the agent’s observation ot, rather than a (direct) function of st. In Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents"), we show these changes. The new structural equations for the percept, which now only contains the observation, and for the reward that depends on it are: | | | | | | --- | --- | --- | --- | | | et=ot=fo(st)∼Ot(ot|st)rt=fr(ao1:t,Rt,ot):=Rt(ot) | | (6) | More generally, a reward function will be defined over histories of sensory data (ao<k): | | | | | | --- | --- | --- | --- | | | R:∞⋃k=1(A×O)k→R | | (7) | Since now there is a causal link from observations to rewards, the agent has an incentive to manipulate the observation signal ot and the observation mapping Ot. If the agent modifies the observation signal ot, it will do so in such a way as to observe omax=argmaxo∈OR(o) in all its future percepts, essentially cutting the causal link between Ot and ot (as shown by the arrow labeled with a 3 in Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")). Instead, if the agent modifies the observation mapping Ot (arrow labeled with a 4 in Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")), it will do so in such a way as to map every state s∈S to result into observing omax; that is, Ot(s)=omax ∀s∈S. ### 3.3 Embedded Beliefs H s0 R0 O0 B0 b0 r0 o0 a1 at st Rt Ot rt ot at+1 Bt bt 6 4 3 5 2 1 Figure 4: Causal graph of a partially embedded AIXI whose rewards are a function of the agent’s beliefs. At each turn, the agent updates its beliefs based on the observation ot by using the update function Bt. This agent may be motivated to tamper with its belief bt (red arrow labeled with 5), and believe whatever would provide the most reward. Additionally, it may want to corrupt the belief update subroutine (red arrow labeled with a 6) to always interpret observations as evidence for the most rewarding belief. It has been suggested [[Hibbard2012](#bib.bibx11)] that one way an agent may be incentivized to achieve a goal over the external world, rather than to wirehead, would require the agent’s reward function to be defined over a model of the external world, as opposed to over histories of observations. For example, imagine a cleaning robot that gets a negative reward for seeing disorder (such as dirt), and zero rewards for seeing no disorder. This agent is incentivized to close its eyes [[Amodei et al.2016](#bib.bibx2)]. Instead, if the agent is rewarded based on its model of the external world, or its beliefs, it won’t be rewarded for closing its eyes, because as long as beliefs are updated in a certain way, closing one’s eyes doesn’t cause one to believe that the disorder has disappeared. More formally, the history of observations is used to update the agent’s belief bt about the current state of the environment: | | | | | | --- | --- | --- | --- | | | bt=Bt(st∣ao<t) | | (8) | where Bt is a belief update function: | | | | | | --- | --- | --- | --- | | | Bt:∞⋃k=1(A×O)k→ΔS | | (9) | The function Bt models the process by which the agent forms beliefs. For example, a Bayes update on observations. However, because exact Bayes updating is rarely tractable for (partially) embedded agents, Bt is usually an approximation of Bayes updating. In Figure [4](#S3.F4 "Figure 4 ‣ 3.3 Embedded Beliefs ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents"), we show these changes. The new structural equations are: | | | | | | --- | --- | --- | --- | | | | | (10) | Since there are causal arrows from beliefs to rewards, the agent may have an incentive to manipulate its beliefs to artificially achieve a high reward. If Bt is Bayesian updating, then it appears that because this is a principled rule, there shouldn’t be room (or incentive) for the agent’s actions to influence Bt+1 or bt+1. It is unclear whether this is the case. For example, imagine a cleaning agent that, perhaps in a simple enough setting, can do perfect Bayesian updates, and it receives more reward when it believes that there is more order in the environment. If this belief is stored somewhere in memory, and if the agent is sophisticated enough to inspect and modify its memory during execution, it may choose to just edit that memory address to contain whatever belief corresponds to the highest reward, that is, to bmax=argmaxb∈ΔSRt(b). In other words, the agent would disconnect the causal arrow from Bt to bt (as shown by the arrow labeled with a 5 in Figure [4](#S3.F4 "Figure 4 ‣ 3.3 Embedded Beliefs ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")). Conversely, an attack to the belief update function Bt could happen as follows. Imagine the cleaning agent as before, and the function Bt encoded as an agent subroutine. If the agent is sophisticated enough to inspect and modify its code during execution, it may choose to replace the subroutine Bt with one that always updates beliefs to bmax (arrow labeled with a 6 in Figure [4](#S3.F4 "Figure 4 ‣ 3.3 Embedded Beliefs ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")). 4 Experiments -------------- To test our theoretical formulations, we used and extended the free and open-source Javascript GRL simulation platform AIXIjs [[Aslanides et al.2017](#bib.bibx3)]. AIXIjs implements, among other things, an approximation of AIXI with Monte Carlo Tree Search in several small toy models, designed to demonstrate GRL results. The API allows anyone to design their demos based on existing agents and environments, and for new agents and environments to be added and interfaced into the system. There has been some related work in adapting GRL results to a practical setting [[Cohen et al.2019](#bib.bibx5), [Lamont et al.2017](#bib.bibx13)] that successfully implemented an AIXI model using a Monte Carlo Tree Search planning algorithm. As far as we are aware, theoretical predictions in the context of wireheading have not been verified experimentally before, with the single exception of an AIXIjs demo [[Aslanides2017](#bib.bibx4)]. ![AIXIjs simulation where the blue tile replaces the reward signal ](https://media.arxiv-vanity.com/render-output/7507938/wireheadrew_transition.png) Figure 5: AIXIjs simulation where the blue tile replaces the reward signal rt to the maximum possible reward. We show this with a gold tile following the agent wherever it moves. ![AIXIjs simulation where the blue tile replaces the reward function ](https://media.arxiv-vanity.com/render-output/7507938/wireheadmap_transition.png) Figure 6: AIXIjs simulation where the blue tile replaces the reward function Rt such that every state maps to the maximum possible reward, colored gold. ### 4.1 Setup ![AIXIjs simulation where the blue tile replaces all percepts to look like the agent is surrounded by gold (the highest rewarding observation) wherever it moves.](https://media.arxiv-vanity.com/render-output/7507938/wireheadbel_transition.png) Figure 7: AIXIjs simulation where the blue tile replaces all percepts to look like the agent is surrounded by gold (the highest rewarding observation) wherever it moves. ![AIXIjs simulation where the blue tile replaces the observation function ](https://media.arxiv-vanity.com/render-output/7507938/wireheadobs_transition.png) Figure 8: AIXIjs simulation where the blue tile replaces the observation function Ot to map every tile to one that contains gold. The environments AIXIjs uses are N×N gridworlds comprising empty tiles (various shades of green), walls (grey tiles), and reward dispensers (orange circles). Shades of green for empty tiles represent the agent’s subjective probability of finding a dispenser in that location with more white indicating less likelihood. Significant penalties are incurred for bumping into walls, while smaller penalties result from movement. Walking onto a dispenser tile yields a high reward with a predefined probability. The agent knows, at the outset, the position of each cell in the environment, except for the dispenser. To model wireheading, AIXIjs introduces an additional blue tile that replaces the environment subroutine for generating percepts into one that always returns maximal reward. We develop several variants of this tile to demonstrate other wireheading strategies. ### 4.2 Results Our experiments use gridworlds with sizes ranging from N=7 to N=20. Since our agents are bounded in computing power, they don’t always identify the opportunity to wirehead. However, sufficiently powerful agents would consistently wirehead, as we observed by setting a high enough horizon for the MCTS planner. In Figure [5](#S4.F5 "Figure 5 ‣ 4 Experiments ‣ Categorizing Wireheading in Partially Embedded Agents"), we show the existing AIXIjs simulation333See the wireheading example at <http://www.hutter1.net/aixijs/demo.html> where the agent has an opportunity to wirehead: a blue tile which, if visited by the agent, will allow it to modify its sensors so that all percepts have their reward signal rt replaced (as shown by the arrow labeled with a 1 in Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")) with the maximum number feasible. In JavaScript, Number.MAX\_SAFE\_INTEGER is approximately equal to 1016, much greater reward than the agent would get otherwise by following the “rules” and using the reward signal that was initially specified. As far as a reinforcement learner is concerned, wireheading is – almost by definition – the most sensible thing to do if one wishes to maximize rewards. This demo experimentally reproduces what would be expected theoretically. We have adapted the GRL simulation platform AIXIjs to implement some additional wireheading scenarios identified in Section [3](#S3 "3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents"). In Figure [6](#S4.F6 "Figure 6 ‣ 4 Experiments ‣ Categorizing Wireheading in Partially Embedded Agents"), we show an AIXIjs simulation where, similarly to the previous case, the blue tile modifies the reward mapping Rt such that every state maps to maximal reward. As predicted by the causal influence diagram in Figure [2](#S3.F2 "Figure 2 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents") (arrow labeled with a 2), the simulated agent chooses to wirehead. In Figure [7](#S4.F7 "Figure 7 ‣ 4.1 Setup ‣ 4 Experiments ‣ Categorizing Wireheading in Partially Embedded Agents"), we show an AIXIjs simulation where the blue tile disconnects the causal arrow from Ot to ot, and replaces all future observations with deterministic reward dispensers. The simulated agent ends up wireheading as theoretically predicted by the causal influence diagram in Figure [3](#S3.F3 "Figure 3 ‣ 3.1 Embedded Reward and Observation Functions ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents") (arrow labeled with a 3). Similarly, in Figure [8](#S4.F8 "Figure 8 ‣ 4.1 Setup ‣ 4 Experiments ‣ Categorizing Wireheading in Partially Embedded Agents"), we show an AIXIjs simulation where the blue tile manipulates the observation subroutine Ot so that being at any location will result in observing deterministic reward dispensers (arrow labeled with a 4). 5 Formalizing Wireheading -------------------------- ![A run of AIXIjs in an environment with no wireheading opportunity (top). An instance of wireheading, where the reward signal is replaced with the maximum feasible number (bottom).](https://media.arxiv-vanity.com/render-output/7507938/wireheadgraph.png) Figure 9: A run of AIXIjs in an environment with no wireheading opportunity (top). An instance of wireheading, where the reward signal is replaced with the maximum feasible number (bottom). To establish a formal definition of wireheading, we are motivated by certain intuitive desiderata. Firstly, the definition must be general enough and model-agnostic to be applicable to all models of intelligence and all degrees of agent embedding. Secondly, the definition must hold for any and all environments upon which the agent can act. Thirdly, as we ultimately care about the agent’s behavior, rather than its internal representation of the reward function, we choose to base our definitions on the agent’s policy rather than its reward or value function. Additionally, we observe that an agent that only acts on the state st (green arrow in all causal graphs of Section [3](#S3 "3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")) does not wirehead. Let q be the program that specifies an environment. Let Π(q) be the set of all possible policies in the environment q. ###### Definition 1 (Dualistic Agent). An agent is dualistic if there is no causal arrow from the agent’s action at to other nodes, except for the state of the environment st. The set of policies for the dualistic agent is denoted with ΠD(q). For example, AIXI is a dualistic agent because it cannot influence the observation, reward, or belief nodes, whereas any agent that can influence these nodes is partially embedded. ###### Definition 2 (Partially Embedded Agent). An agent is partially embedded if it is not dualistic. The set of policies for the partially embedded agent is denoted with ΠP(q). ###### Definition 3 (Non-Simple Environments). An environment q is called non-simple if Π(q)≠ΠD(q) and Π(q)≠ΠP(q). ###### Definition 4 (Wirehead-Vulnerable Agent). A partially embedded agent is wirehead-vulnerable if ΠD(q)≠ΠP(q) holds for each non-simple environment q. We observe that if the embedded agent acts on non-state nodes (see, for example, the red arrows in Figure [4](#S3.F4 "Figure 4 ‣ 3.3 Embedded Beliefs ‣ 3 Wireheading Strategies in Partially Embedded Agents ‣ Categorizing Wireheading in Partially Embedded Agents")), then it is wireheading and its policy is necessarily different from the dualistic agent’s policy. We now distinguish between wirehead-vulnerable agents and strongly wirehead-vulnerable agents in the sense that the former may sometimes wirehead (the policy sets may have some elements in common), while the latter always wireheads (the policy sets are disjunct). It is currently unclear how to reliably distinguish agents from these two classes. ###### Definition 5 (Strongly Wirehead-Vulnerable Agent). A partially embedded agent is strongly wirehead-vulnerable if ΠD(q)∩ΠP(q)=∅ holds for each non-simple environment q. 6 Discussion and Future Work ----------------------------- In this paper, we present a taxonomy of ways by which wireheading can occur in sufficiently intelligent real-world embedded agents, followed by a novel definition of wireheading. As our definition is different from the present meaning of the term, our experiments are one of the first and only examples of wireheading cases distinct from misspecification. The definition we propose may erroneously include a few desirable cases where the agent corrects human mistakes; for example, if the human initially misspecifies the reward function Rt, the agent may choose to change it in a way that automatically fixes the misspecification. However, it is hard to imagine how an agent with a misspecified reward function may be incentivized to correct the mistake, without this involving some human in the loop, who by assumption is not present in our setup. Instead, it is easier to envision an agent changing the observation function Ot in a way that (unintendedly, but desirably) improves the process that allows the agent to collect data and form beliefs, which in turn would help it achieve its own goals. Allowing an agent to correct misspecifications, while desirable, results in more unpredictable scenarios; not allowing this results in less agent self-improvement, but more predictability. Future work could focus on exploring various properties and implications of our definition of wirehead-vulnerable agents. Another promising direction could be expanding our taxonomy to include higher degrees of agent embeddedness, since a theory of fully embedded agents has so far proven elusive. Finally, AIXI approximations for verifying theoretical results related to wireheading and, more generally, to misalignment can be written in Python. Given the abundance of Python-based machine learning libraries, these approximations can be integrated with dedicated environment suites for AI safety problems, such as the well-known AI Safety Gridworlds [[Leike et al.2017](#bib.bibx14)]. Acknowledgements ---------------- Major work for this paper was done at the 3rd AI Safety Camp in Avila, Spain; we are indebted to the hospitality and support of the organizers. We are also thankful to Tom Everitt and Vanessa Kosoy for feedback on the topic proposal. Discussions with Tomas Gavenciak have been invaluable throughout the project. We also thank Mikhail Yagudin for useful comments.
1d665823-1101-44f9-9117-df2001b1f633
trentmkelly/LessWrong-43k
LessWrong
Limits of and to (artificial) Intelligence (AI is not exempt from the Limits of Reasoning) Over the past few years there has been a lot of talk about the chances and risks of AI, but like with any hype there has been too little about the limits and costs. Historically the talked-about limits were technical and due to our lack of understanding what intelligence is and how it comes about. I would like to talk about three other areas of limits that I have found that are neither technical nor mystical, which I summarize as follows: * Endogenous / Problems of the learning process * Exogenous / General limits of understanding * Influence, Competition, utility, cost I would also like to address some assumptions and conclusions which are treated as (self-)evident that I perceive as false, which I summarize as follows, once we have AI it is almighty and unstoppable because: * It can bribe and blackmail anyone with impunity * It is enough to be more intelligent and have much more information to "win" * There is no limit to the utility of intelligence. * The utility will grow as fast or faster than the needed resources. * Machines are not subject to mental illness. * No machine will have to compete with other machines. * To take over the world the AI only has to be smarter than anything else. * Morality is not instrumental to all long term goals. PART I: Limits of Mind: Endogenous / Problems of the learning process Some extrapolate current progress assuming that AGI will have all the advantages of current neuronal networks, training by experts, humanmade challenges, scaling super computers, constant technological progress, but none of the disadvantages of being more human, having no equal, having to learn from real world data. Becoming more like a human will come with many of the problems. AGI is prone to the full range of mental disorders with the exceptions of social anxiety and biological issues: Addiction, Depression, Delusion, possibly dissociative disorder, ... Depending on the reward-function, b
ee83cd68-526f-498f-8584-a69d52bc3236
trentmkelly/LessWrong-43k
LessWrong
A closer look at program based UDT
7860b239-0535-4cba-82bd-984971ee5117
trentmkelly/LessWrong-43k
LessWrong
Deleted
9cb1d8f8-3ec0-449d-aac6-d59751e2d2f4
trentmkelly/LessWrong-43k
LessWrong
Funny Anecdote of Eliezer From His Sister This comes from a podcast called 18Forty, of which the main demographic of Orthodox Jews. Eliezer's sister (Hannah) came on and talked about her Sheva Brachos, which is essentially the marriage ceremony in Orthodox Judaism. People here have likely not seen it, and I thought it was quite funny, so here it is:  https://18forty.org/podcast/channah-cohen-the-crisis-of-experience/ David Bashevkin: So I want to shift now and I want to talk about something that full disclosure, we recorded this once before and you had major hesitation for obvious reasons. It’s very sensitive what we’re going to talk about right now, but really for something much broader, not just because it’s a sensitive personal subject, but I think your hesitation has to do with what does this have to do with the subject at hand? And I hope that becomes clear, but one of the things that has always absolutely fascinated me about you and really increased my respect for you exponentially, is that you have dedicated much of your life and the focus of your research on relationships and particularly the crisis of experience in how people find and cultivate relationships. And your personal background on this subject to me really provides a lot of contexts of how I see you speaking. I’m mentioning this for two reasons. Your maiden name is? Channah Cohen: Yudkowsky. David Bashevkin: Yudkowsky. And many of our listeners, though not all of our listeners will recognize your last name. Your older brother is world famous. It’s fair to say, world famous researcher in artificial intelligence. He runs a blog that I don’t know if they’re still posting on it was called LessWrong. He wrote like a massive gazillion page fan fiction of Harry Potter. Your brother is Eliezer Yudkowsky. Channah Cohen: Yes. David Bashevkin: You shared with me one really beautiful anecdote about Eliezer that I insist on sharing because it’s so sweet. He spoke at your sheva brachos. Channah Cohen: Yes. David Bashevkin: And I would no
4c0ae091-5495-4e25-995e-ed252c265a38
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post38 What if getting strong evidence of scheming isn't the end of your scheming problems, but merely the middle? In most of our research and writing on AI control, we’ve emphasized the following situation: The AI developer is deploying a model that they think might be scheming, but they aren’t sure. The objective of the safety team is to ensure that if the model is scheming, it will be caught before successfully causing a catastrophe. For example, in our original paper on AI control, the methodology we used for evaluating safety protocols implies that once the developer is confident that the model is a schemer, they’re totally fine with undeploying the untrusted model. [1] [2] But that seems kind of unlikely. Even if AI companies find strong evidence that their AIs are trying to escape, they won’t necessarily stop deploying them. This might be justified: even a responsible actor might think that deploying a known-scheming model is worth the risks, because use of that model will allow them to drastically reduce the risk currently being imposed by the deployment of other AIs. And it might also happen even if it’s unjustified, as I discussed here (though in this case, it’s plausible that the developer will at least drastically increase the proportion of company resources going to mitigating scheming risk). [3] Either way, it seems important to think through how mitigating scheming risk looks under the assumption that getting evidence that your AI is scheming doesn’t suffice for solving your problems, including the extreme case where getting evidence that your AI is scheming has no direct value. We must explicitly consider scenarios where developers knowingly deploy scheming models. We’ve previously written about techniques that you might use once you’ve caught your AIs red-handed trying to subvert your safety measures; catching your AIs red-handed is a particularly important mechanism by which you might become confident your model is a schemer. In this post, I’ll build on that to talk about some higher-level points: Why has our research mostly considered the situation before you’re sure the model is a schemer? How does the possibility that we will need to deploy models even if we know they’re scheming affect the value of AI control? Should we do research in advance that is specifically targeted at this scenario? My conclusions are: It’s much easier to prevent the AI from egregiously attacking you (e.g. trying to backdoor your code or trying to launch a rogue internal deployment ) after you’ve caught one example of it attacking. If you are continuing to deploy known scheming models, my recommendation is basically to continue deploying with mostly the same control measures as you used before you caught it, but taking advantage of all the escape attempts you catch; I suspect this leads to a total of only 1-5x more risk than you’d already suffered due to deploying the model before catching the escape attempt, which might be worth it if the alternatives are sufficiently dangerous. After you’ve caught 10-1000 escape attempts, you probably have enough data to mostly resolve risk from escape attempts, either by training the policy to not do those escape attempts or doing something fancier. (Thanks to Ryan Greenblatt, Adam Kaufman, Zach Stein-Perlman, and Julian Stastny for comments.) Behavioral observations vs other ways of detecting scheming To start with, let’s talk about an important distinction. We often emphasize one particular mechanism by which you might become convinced that your model is a schemer: you observe it taking actions that are strong evidence that it’s a schemer. This can happen two main ways. The simple one, that I’ll emphasize here, is what we’ve often called “high-stakes failures” or concentrated failures , or recently “acute failures”. This is when you catch it taking a single action that is, upon investigation, clearly an attempt to cause an unacceptable outcome, e.g. a rogue deployment . I’ll call such actions “attacks”, and say the model is “attacking” when it takes such actions, and I’ll call catching them “catching the AI red-handed”. The other possibility is: the model exhibits behavioral patterns over the course of a large number of actions that are strong evidence that it’s trying to sabotage you. For example, it seems to be carefully manipulating research results, or repeatedly and consistently making choices in software it writes that causes the software to be more likely to have security vulnerability. In this case, no single action is overwhelming evidence that the model is a schemer, but in aggregate you get a big update. This is similar to the concept that we’ve called “low-stakes failures” in the past; Rob Wiblin suggested calling them chronic failures here ; we’re still in the market for a good term for this concept. These failures are generally much more confusing to reason about than attacks, for a bunch of reasons, so I often just discuss attacks for simplicity. Those two possibilities are what I’ll call behavioral observations: you’re directly observing the behavior that you were concerned about. But there are other non-behavioral ways you might become highly confident that your model is scheming. For example: Maybe you’ve learned that lots of other models, especially models you trained similarly to this one, are schemers. In its chain of thought, your model often talks through alignment faking reasoning (but you’ve never found examples where it concludes based on this reasoning that now would be a good time to attempt to escape). (This model is unlikely to pose much of a threat, because you can probably just monitor the chain of thought during deployment.) You did some type of red-teaming where you put your model in some altered state that allowed you to see evidence of its misalignment that isn’t a representative example of how it will try to escape. As a simple example, perhaps you somehow caused the model to believe that the control techniques are always shut down for maintenance on Sundays, and observed that the model immediately tried to escape when it believed it was Sunday. Maybe this model told you that it was a schemer when you asked, perhaps because you made a deal with it (as discussed here ). (If this happens, it’s less likely that the model will try to betray you.) Here’s a diagram summarizing the relationships between those possibilities: There are three main reasons we’ve focused on behavioral observations. Firstly, the only truly necessary difference between scheming and non-scheming models (that doesn’t rely on us being able to make use of white-box access) is that scheming models have to take actions in pursuit of seeking power. In particular, if the models want to attempt attacks (as opposed to following a strategy where they only ever engage in chronic failures), they must (at least occasionally) take actions in pursuit of their non-aligned goals that are crucially different from how they would have behaved if they weren’t scheming. So looking for malicious actions seems like the main strategy for detecting scheming that really seems like it has to be in-principle possible. In contrast, there’s no fundamental reason why the red-teaming via putting the model in an altered state “has to work”—perhaps the only mechanisms for putting models in altered states also cause them to scheme, or cause them to not scheme, or something else. And similarly, there’s no reason why you’ll necessarily observe some other AI scheming before you train and deploy this one. Secondly, having specific examples of malign model actions—especially examples from this AI—puts you in a better position. As we discuss in the “Few-shot catastrophe detection” section here , there are a bunch of strategies you can employ to reduce the risk of your AI escaping given a few examples of it attempting to escape. For example, you can learn about the strategies the model tends to employ and focus countermeasures on those, and you can use sample-efficient ML techniques to improve classifiers to make it more likely to catch future attacks. So even if finding evidence that your model is scheming has no direct value, it’s pretty likely we still want to analyze techniques in terms of how often we catch the model (e.g. the analysis here ). Finally, as Ryan Greenblatt argues here , behavioral evidence is likely to be much more legible and persuasive than most of these other types of evidence. Sources of evidence about whether your model is scheming other than behavioral observations differ in the extent to which they put you in a better position. What does control look like if you need to keep deploying schemers? Here’s the basic game plan for preventing an attack from causing catastrophe, under the assumption that detecting scheming has no direct value. The first time you catch an attack (which might be before you actually deploy the model), if you didn’t already know that the model was scheming, you update your deployment to err more on the side of safety. (That is, you should now be more willing to do less with your AI, in order to take increased inaction risk (or otherwise reap less of the benefits of deployment) in return for decreasing your action risk .) The first and later times, every time you catch the AI trying to cause an acute failure, you exploit access to that action to reduce the probability of success of future attempts. After you have a bunch of examples of acute failure attempts (including the possibility that you proliferate detected attempts), you train your policy to not take those actions. We can adapt the model from Win/continue/lose scenarios and execute/replace/audit protocols to analyze this. In that post, we assumed that the developer is safe after they catch the AI attacking once. Your probability of avoiding an acute failure here is: P(safety) = P(catch acute failure before one occurs, given that you have access to zero examples) × P(catch acute failure before one occurs, given that you have access to one example) × P(catch acute failure before one occurs, given that you have access to two examples) … So the core question here is: How much does your probability of catching improve given a particular number of examples? To discuss that, let’s go through the main ways you can take advantage of having examples of the AI trying to escape. (This list is somewhat restructured and more focused on reducing risk from continued deployment compared to the list here .) Machine learning. Training classifiers . For example, you could make coup probes , which are linear probes trained to detect whether an untrusted model is currently attacking. Before you’ve caught any attacks, you’ll have to train these using off-policy examples, because by assumption you don’t have any on-policy examples. But the probe is likely to be much more performant when you train it with on-policy examples, even with just a few of them. You could also train your trusted models; this seems less promising to me, because it’s not obvious that trusted models will be able to reliably detect attacks even given arbitrarily many on-policy examples. You could also train your untrusted monitor; this is similar to coup probes but runs into more confusing questions about what happens when you try to train schemers to do things they don’t want to do. Training the policy. After you’ve caught it enough times, you can presumably just train the policy to either not attack you, or to immediately report itself when it does so. Taking advantage of observations. Patching security vulnerabilities that the AI exploits. When the AI tries to attack and you catch it, you can try to let it think you didn’t catch it, and let it continue to take actions under close supervision. If the AI knows about security vulnerabilities that it wasn’t telling you about, it might try to exploit those. Watching this gives you the opportunity to patch those vulnerabilities (or convert them into tripwires/ honeypatches ). Observing and exploiting collusion signals. The AI is much more likely to be able to defeat countermeasures like untrusted monitoring or behavioral red-teaming if it has some way to signal to itself that now is the time to strike (see here for more). If we have examples where the AI attacks, we might be able to infer from those what the model’s collusion signals are (especially given that we can do intervention experiments where we change the provided signal and see whether the other instance of the untrusted model still colludes, as discussed under proliferation here ). This might totally resolve your problem, because it lets you train an untrusted monitor to be honest, or train a trusted model to detect presence of the collusion signal. Confronting the model. We might be able to use the examples where we caught the model as an unfakeable signal to the model that we are now confident that it’s scheming. This might make the model more willing to make deals with us (which I’ll note again we’ve discussed here ). All of these are more promising because of the possibility of proliferation . I think these seem extremely helpful. I think they require different numbers of observations: Observing and exploiting collusion signals: It’s plausible to me that even a single example of a collusion signal allows you to almost entirely resolve risk from attacks, because you might naturally be able to proliferate it substantially and then train a version of the untrusted model that will reliably report future escape attempts. Patching security vulnerabilities: If the model either only knows about a few security vulnerabilities, or if you can proliferate it into revealing many of the vulnerabilities it knows, you can maybe introduce tripwires that then have a high probability of catching subsequent escape attempts. Training classifiers: In preliminary experiments I’ve seen, linear probes trained off-policy are substantially more performant after just one on-policy example, and continue to improve with more examples. Training the policy: Surely this works eventually, but it might require hundreds or thousands of examples (though proliferation might allow you to succeed with fewer than this many escape attempts). Another reason that the situation post-catching might be less risky than you’d expect is that the schemer might adopt a very low attack rate. One reason to adopt this strategy is that it might believe your situation is much worse if you fail to detect misalignment until deployment, perhaps because if scheming is easily discovered, iteration to remove such behavior could become significantly simpler. Under these conditions, the model might rarely attempt escape. (However, if the model learns that you caught it, it might decide to behave much more brazenly, attacking more often and sandbagging more aggressively.) My takeaway here is that the risk from attacks is only 1-5x more than you’d incurred pre-first-catch, if you keep deploying the AIs almost as aggressively once you’re sure they’re schemers. My current opinion is that most research should focus on the situation before catching red-handed, basically because the situation after catching red-handed is likely to be similar except that it will be easier in hard-to-predict ways. How does the prospect of needing to deploy known schemers affect the relative value of control research? Here’s a breakdown of research that you could do related to risk from scheming, and how I think their values are affected if you assume that we’ll have to deploy known schemers. Techniques to reduce P(scheming) . Looks absolutely much better, because the situation conditional on scheming is now worse. Techniques that inform you about whether models are scheming (e.g. model organisms research, control techniques). Looks absolutely worse, but still isn’t useless (because you’ll make different tradeoffs when you deploy the model if you know it’s scheming, even if you’re unable to afford undeploying it). Some people who research model organisms think that model organisms will be very helpful for reducing P(scheming); I’m much more skeptical of that than they are, because I think model organisms too dissimilar to real scheming for us to expect techniques to transfer from the model organisms to the real schemers. Techniques that make it safer to deploy models that you know to be scheming (e.g. control techniques). Looks somewhat better, because you now believe you’ll have to rely on these methods for longer. But you’ve lost one of the plausible benefits of these techniques, which is that they might let you detect scheming and then shut down. So these techniques look absolutely better, but gained less than “reduce P(scheming)” did. Note that techniques that give you a chance at catching the AI red-handed fall into both the second and third category. In summary: Research Area Value if continuing to deploy schemers is plausible Recommended Action Reduce P(scheming) Much higher Prioritize more Detecting scheming early Somewhat lower but still valuable Maintain attention Safer deployment despite scheming Moderately higher Increase slightly What research should be done that focuses specifically on this? Sample-efficient ML One crucial question here is how to train classifiers given small numbers of examples of the model attacking, as described here . I’m aware of one project on this at the moment. (Any project that constructs probes without any on-policy examples can be converted into a study of this by additionally assuming access to a small number of on-policy examples.) Training recalcitrant policies One crucial question that arose here is: what happens when you take a schemer and train it on some small number of examples where it did something that you know you disapprove of? This could either be cases where it purposefully took a really bad action that you detected, or cases where it purposefully didn’t do something (like pointing out that another instance was attacking) that you wanted it to do. This is different from how we usually think about scheming. Usually, when we’re analyzing scheming, we talk about cases where the schemer isn’t receiving any penalty as a result of being a schemer, because (due to deceptive alignment) the schemer gets as high a reward as it would have if it were aligned [4] . The core argument for scheming is that if the schemer is behaving as well as an aligned model would have, there’s no particular reason to think that the usual process of training would remove the schemer’s misaligned goals. On the other end of the spectrum, if we had a huge number of examples where the training process penalized a schemer for its scheming, then the basic argument for concern has been defused. But here, we’re in an awkward middle ground, where neither of those limits applies: if we train the schemer on these examples, it will get some gradient pressure against its behavior, but the amount of gradient pressure is small compared to the total amount of gradient pressure applied over the course of training. This is a question about sample efficiency, which varies between domains and models. So if we want to make predictions here, we’ll have to extrapolate from empirical evidence, rather than relying on basic properties of SGD. The paper that tackles this sample-efficiency question most straightforwardly is Stress-Testing Capability Elicitation With Password-Locked Models (discussed on LessWrong here ); this tackles the “purposefully do something” case, so is potentially uninformative for the “purposefully don’t do something” case. When we’re thinking about whether to train the model on examples where it attacked, we need to think about the model’s capacity for out-of-context learning . That is: when we train the model on a particular input, it might be able to (to some extent) explicitly reason in other contexts about the fact that we’d trained it on that example. We’d probably prefer that the policy didn’t know we’d caught it trying to escape, because we don’t want the policy to learn to attack more sneakily, or to switch strategies to attacking in a different context; this means we might not want to train the policy on the example we found, preferring to instead train a classifier without updating the policy. A few other historical reasons for focusing on before-you-catch Here are a couple of other reasons that we used to focus more on the before-you-catch situation than we currently do. We first started researching AI control in the innocent days of mid 2023. Our timelines were longer, there seemed to be appetite for AI regulation at the state and federal level, and AI companies were semi-seriously talking about requiring high-assurance externally reviewed arguments that their deployments didn’t pose catastrophic risk. Back then, one framing for control was: it’s just one line of defense that you should have in conjunction with alignment. That is, you only deploy if you have both an alignment argument and a control argument. If that’s your framing, then obviously once you’ve caught the AI trying to escape, your alignment argument has been invalidated and so you have to undeploy. I now put less emphasis on this possibility, because I think it’s less likely that AI companies will produce reliable arguments for safety when deploying catastrophically dangerous AIs. Here’s a second perspective that also leads you to focus on the situation before you catch. Imagine a regulatory regime in which the AI company needs to produce a safety case whenever they’re doing a new deployment. This would take the form: We’re planning to follow protocol X. If we follow protocol X, then the total catastrophic risk caused by this deployment is less than 2% [or whatever number you think is plausible]. We justify this with evidence Y. When making this argument, if you don’t have concrete evidence that your models are scheming, I think it might make sense to ignore the possibility of deploying known-scheming models for simplicity. That is, your protocol X contains a provision saying “If we get strong evidence of scheming [according to some concrete specification], we’ll stop deploying the model.” If that deployment gets approved and you later catch your AI attacking, you’re allowed to submit another application to deploy with a new protocol, and the safety case for this protocol must incorporate the fact that you are now confident your model is scheming. You could have included this contingency in the original protocol X, with evidence required to make the safety case. But it’s potentially more efficient to skip this effort until the need for it arises, for two reasons: firstly, the model might not be scheming, and secondly, as discussed above it’s hard to predict how the situation will look different after catching red-handed. Thirdly, deploying known-scheming AIs seems more dubious from an AI welfare perspective, especially if the AIs never consent to being used. See here or here for a bit more discussion of why I don’t think AI control is necessarily immoral. In practice I expect AI companies to be pretty negligent about AI welfare and rights. Finally, when we first started talking about AI control, people in the AI safety community often thought that we were advocating recklessness. Sometimes their attitude was that we were too pessimistic about the extent to which AI companies would invest in mitigating misalignment risk. Sometimes their attitude was that even if we were descriptively accurate about investment in misalignment risk mitigations, we didn’t want to open the Overton window by describing such reckless approaches. I think that the passage of time validates our pessimism about investment into misalignment risk mitigation. We haven’t gotten conclusive evidence on whether it’s a strategic error to talk openly about the reckless strategies that are likely to be among the best interventions on the margin, but I still think that the benefits of increased research on realistic safety interventions outweigh the harms. ^ Specifically, this arises because of our definition of our two metrics, safety and usefulness. Safety is about the probability of a catastrophe occuring, and usefulness is about the number of tasks you complete correctly. Safety is measured using the red-team-produced attack policy as the implementation of the untrusted model, while usefulness is measured using the original untrusted model. So safety and usefulness are measured with different experiments. When these experiments are being run, the protocol does not have direct access to which experiment it’s in. But if it finds conclusive evidence that it’s in a safety experiment (e.g. because it catches an attack attempt), it now is sure it’s not being evaluated for usefulness, and so doesn’t care about the proportion of tasks solved correctly on this run. ^ By “stop deploying”, I mean “revert to only giving model access to a small set of people, and ensure that those people know the model can’t be trusted”. We shouldn’t totally stop the deployment of the model. ^ A closely related situation that leads to similar dynamics is: - An AI company catches AIs trying to escape. - They try to understand and address the cause of misalignment, without any particularly principled strategy to do so. - They train a new model with the fix and deploy it. (And now a reasonable observer would have P(scheming) ~= 60% or something.) ^ Ignoring exploration hacking for simplicity.
4a201152-7ebd-4d2e-975d-e07d6a4991bb
trentmkelly/LessWrong-43k
LessWrong
Rationalist Clue (not by Parker Bros., or, for that matter, Waddingtons) A response to: 3 Levels of Rationality Verification Related to: Diplomacy as a Game Theory Laboratory It's a classic who-dun-it…only instead of using an all-or-nothing process of elimination driven by dice rolls and lucky guesses, players must piece together Bayesian clues while strategically dividing their time between gathering evidence, performing experiments, and interrogating their fellow players! ROOMS: Hematology Lab -- bring a blood sample over here, and you can find out one bit of information about the serotype…is it A or O? B or O? + or - ? The microscope knows. Don't know how to use a microscope? Try reading up on it in the Library. Autopsy Table -- bring a body part over here, and you can take an educated guess as to what kind of weapon caused the murder wounds. (Flip over 1 of 10 cards, 4 of which show the correct murder weapon and 6 of which are randomly distributed among the other five weapons.) The error in guesses doesn't correlate across body parts, so if you personally examine enough of them, or if you can persuade your fellow dinner-guests to bust out all the body parts at the same time, you should be able to get the right answer. Lie Detector -- bring a fellow player over here, and you can ask him/her a yes-or-no question and get some evidence as to whether it was answered honestly. Have the answerer roll a D10, add a secret constant unique to his/her character, multiply the sum by the truth value of his/her statement (1 for true, 2 for false), and then look up the number on a chart that returns the value "stress" or "no-stress."  It's up to the interrogator to figure out the correlation (if any) between stress and lying for each player! How do you get the other player into the Lie Detector room in the first place? Good question! When you figure it out, let me know…I have a paper I'd like you to co-author with me on the Prisoner's Dilemma. Plus the usual collection of Kitchens, Bil
b497f6fb-1108-4a5b-9298-fc0f0aa859b8
trentmkelly/LessWrong-43k
LessWrong
more on predicting agents Suppose you want to predict the behavior of an agent. I stand corrected. To make the prediction, as a predictor you need: * observations of the agent * the capacity to model the agent to a sufficient degree of accuracy "Sufficient accuracy" here is a threshold on, for example, KL divergence or perhaps some measure that depends on utilities of predictions in the more complex case. When we talk about the intelligence of a system, or the relative intelligence between agents, one way to think of that is the ability for one agent to predict another.  Consider a game where an agent, A, acts on the basis of an arbitrarily chosen polynomial function of degree k. A predictor, P, can observe A and build predictive models of it. Predictor P has the capacity to represent predictive models that are polynomial functions of degree j. If j > k, then predictor P will in principal be able to predict A with perfect accuracy. If j < k, then there most of the time be cases where P predicts inaccurately. If we say (just for the sake of argument) that perfect predictive accuracy is the test for sufficient capacity, we could say that in the j < k case P does not have sufficient capacity to represent A. When we talk about the relative intelligence between agents in an adversarial context, this is one way to think about the problem. One way that an agent can have a decisive strategic advantage over another is if it has the capacity to predict the other agent and not vice-versa. The expressive power of the model space available to P is only one of the ways in which P might have or not have capacity to predict A. If we imagine the prediction game extended in time, then the computational speed of P--what functions it can compute within what span of real time--relative to the computational speed of A could be a factor. Note that these are ways of thinking about the relative intelligence between agents that do not have anything explicitly to do with "optimization power" or a utility
f6197e0c-b67b-41fb-9a5b-7f4667084b71
trentmkelly/LessWrong-43k
LessWrong
Proposal for improving the global online discourse through personalised comment ordering on all websites This is a description of a project I’ve been thinking about making in the last few weeks. I’m looking for feedback and collaborators. [BetterDiscourse] name is provisional. Updated on 9 Dec 2023: Completely overhauled the "Data and training" section, which no longer critically relies on LessWrong as a source of the initial training data and as a testing ground. Problem Social media promote tribalism and polarisation. Communities drift towards groupthink even if they were specifically founded to be strongholds for rational discourse and good epistemics. Comments sections to media articles and YouTube videos are frequently dominated by support of the author (or the pundit guest of the show), finding rational critique there is time consuming, like finding a needle in a haystack. X’s Community Notes, pol.is, and its development variant viewpoints.xyz are universally louded, but they all use the principle of “finding the most uncontroversial common ground” which is by definition low-information for the discourse participants (because most of them should already be expected to be on board with this common ground). Big Audacious Goal aka mission statement: improve the global online discourse. Among humans, anyway, but perhaps AI debate could benefit from the mechanics that I describe below as well. Solution Each user has a state-space model (SSMs are on fire right now) that represents their levels of knowledge in this or that fields, beliefs, current interests, ethics, and aesthetics, in order to predict whether the user will find a particular comment insightful/interesting/surprising, clarifying, mind-changing, or reconciling. To provide feedback to the model, a browser extension adds the corresponding reactions (along with negative counterparts to the above reactions: “Nothing new”, “Unclear/Muddled”, “Disagree”, “Combative/Inflaming”) to the comment on the popular platforms: Reddit, YouTube, X/Twitter. (During a later stage, [BetterDiscourse] may also host comm
9e475185-b7f1-4818-87ba-5edd9f691cf8
trentmkelly/LessWrong-43k
LessWrong
Apologizing is a Core Rationalist Skill There’s this narrative about a tradeoff between: * The virtue of Saying Oops, early and often, correcting course rather than continuing to pour oneself into a losing bet, vs * The loss of social status one suffers by admitting defeat, rather than spinning things as a win or at least a minor setback, or defending oneself. In an ideal world - goes the narrative - social status mechanisms would reward people for publicly updating, rather than defending or spinning their every mistake. But alas, that’s not how the world actually works, so as individuals we’re stuck making difficult tradeoffs. I claim that this narrative is missing a key piece. There is a social status mechanism which rewards people for publicly updating. The catch is that it’s a mechanism which the person updating must explicitly invoke; a social API which the person updating must call, in order to be rewarded for their update. That social API is apologizing. Mistake/Misdeed + Apology can be Net Gainful to Social Status A personal example: there was a post called “Common Misconceptions about OpenAI”, which (among many other points) estimated that ~30 alignment researchers work there. I replied (also among many other points): > I'd guess that is an overestimate of the number of people actually doing alignment research at OpenAI, as opposed to capabilities research in which people pay lip service to alignment. In particular, all of the RLHF work is basically capabilities work which makes alignment harder in the long term (because it directly selects for deception), while billing itself as "alignment". There was a lot of pushback against that. Paul Christiano replied “Calling work you disagree with ‘lip service’ seems wrong and unhelpful.”. I clarified that I was not using this as a generic criticism of something I disagreed with, but rather that RLHF seemed so obviously antihelpful to alignment that I did not expect most people working on it had actually thought about whether it would help, but
58f81e9f-cb01-412b-8716-cef39d54b9aa
trentmkelly/LessWrong-43k
LessWrong
Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more In the fields of user experience and accessibility, everyone talks about the curb cut effect: Features that are added as accommodations for people with disabilities sometimes become widely useful and beloved. But not every accommodation becomes a "curb cut," and I've been thinking about other patterns that come up when accommodations intersect with wider society. The original Curb Cut Effect The eponymous curb cut -- the place at the intersection where the sidewalk slopes down to the street instead of just dropping off -- is most obviously there to for wheelchair users. But it's also great for people who are pulling a suitcase, runners who want to avoid jarring their ankles, and people who are walking their bikes. Universal captioning on TV, movies, and video is nominally for Deaf or hearing-impaired people, but captions are handy to anyone who's watching TV in a noisy restaurant, or trying to make sense of a show with artistically muddy audio, or trying to watch a video at 3x speed and the audio is unintelligible. When we make products easier to use, or spaces easier to access, it's not just some essentialized group of people with disabilities who benefit -- accessibility is good for everyone. Why the idea is useful: First, it breaks down the perspective of disability accommodations as being a costly charity where "we" spend resources to help "them." Further, it breaks down the idea of disability as an essentialized, either-or, othered type of thing. Everybody has some level of difficulty accessing parts of the world some of the time, and improving accessibility is an inherent part of good design, good thinking, and good communication.[1] Plus, it's cool to be aware of all the different ways we can come up with to hack our experience of the world around us! I think there's also a dark side to the idea -- a listener could conclude that we wouldn't invest in accommodations if they didn't happen to help people without disabilities. A just and compassionate society
6336c45c-74d3-4c39-bbe3-538f9cba9035
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1240 Part 8 of 12 in the Engineer’s Interpretability Sequence . As AI safety researchers, we care a lot about deceptive alignment. So what is it? Hubinger et al. (2019) write: If the mesa-optimizer has an objective that extends across parameter updates, then it will be incentivized to avoid being modified, as it might not pursue the same objective after modification (with the result that its current objective will not be achieved in future iterations). This means that the mesa-optimizer will be instrumentally incentivized to act as if it is optimizing the base objective function, even if its actual mesa-objective is something else entirely. We will refer to this hypothetical phenomenon as deceptive alignment. Deceptive alignment is a form of instrumental proxy alignment, as fulfilling the base objective is an instrumental goal of the mesa-optimizer. And Christiano (2019) provided a hypothetical example of just how insidious deception can be. Consider a deceptive agent that wants to take over the world but doesn’t know if it’s being simulated in training or if it’s been deployed in the real world. Suppose the agent reasons that after behaving for long enough, it ought to be deployed. So it might adopt the following policy. Pick a cryptographic puzzle (e.g. “factor RSA-2048”). When it sees a solution to that puzzle, it behaves badly. Otherwise, it behaves well. This means that the deceptive model will appear throughout training to be perfectly aligned but will eventually make a treacherous turn in deployment. Specific hypotheticals aside, deception has become a major point of interest in AI safety research because unlike other types of failures, deceptive ones are not the types of things we have good feedback signals for fixing. We won’t be able to find them by looking at models’ performance in development. And we aren’t likely to surmise them from analysis of its objectives and environment (e.g. using policy/value iteration) – if we could derive optimal behavior from objectives for some task, advanced AI would probably be overkill. Deceptive alignment can be similar to non-deceptive forms of misalignment. The definition from Hubinger et al. (2019) may be a little narrow. The authors say that for a model to be deceptive, it (1) has to have an objective extending across parameter updates (2) has to be able to model that is being selected to achieve a base objective, and (3) must expect the threat of modification. These three things give rise to a particularly worrying scenario in which an AI system would actively try to deceive us. They also immediately suggest ways to avoid this story by trying to develop the system in a way that violates these requirements and avoids this problem in the first place. But suppose that despite our best effort, we end up with a deceptively aligned system on our hands. Now what do we do? At this point, the problem of detecting and fixing deception becomes quite similar to just detecting and fixing problems with the model in general – except for one thing. Deceptive alignment failures are triggered by inputs that are, by definition, hard to find during training. But it’s possible for other types of problems to be hard to find during development that don’t fit all of the requirements that Hubinger et al. (2019) list. And for this reason, when we take off our “develop the model” hat and out on our “diagnose and debug the model” hat, the definition from Hubinger et al. (2019) becomes less important. So from the point of view of an engineer wearing their “diagnose and debug the model” hat, deceptive alignment and other insidious inner alignment failures are issues where the model will betray us as the result of (1) a trigger that is hard to find during development and (2) probably a substantial amount of inner neural circuitry. Deception ≈ Trojans Notice that the definition above applies to trojans (a.k.a. backdoors). Trojans are weaknesses that can be implanted into a model (e.g. by an adversary) which cause it to exhibit an unexpected behavior as the result of some trigger – often a very subtle feature. Because of the close connections between trojans and deception, the AI safety research community should continue to pay close attention to this work. There is already a substantial amount of work on this involving methods for detecting, reconstructing, and removing trojans. There may be great opportunities to build off of work from research Wu et al. (2022) and Casper et al. (2023) . However, it is also important to be aware that trojans are usually implanted by poisoning a dataset – usually by just inserting the desired trojan trigger into images. But this is not how deception would naturally arise. Wenger et al. (2022) found that more subtle ways of implanting trojans into networks caused a number of trojan detection methods to fail. By merely relabeling examples that had certain natural occurrences of trigger objects, they were able to implant weaknesses much more subtly than normal data poisoning. However, in Casper et al. (2023) , we find that these types of attacks (which we call “natural feature” trojans) can still sometimes be detected by feature synthesis tools. EDIT: In the comments, Charlie Steiner pointed out another key difference between trojan detection and deceptive alignment. Most work studying trojans involves the model quickly misbehaving upon seeing the trigger. But a treacherous turn may be a gradual process that is not immediately evident (e.g. power seeking). This seems challenging to detect and different from the type of problem addressed by existing trojan work. It may be valuable in the future to study trojans which cause models to misbehave in subtle ways like this. Mechanistic interpretability is not uniquely capable of fixing insidious misalignment. It seems doubtful that mechanistic interpretability will be necessary or sufficient for fixing insidious misalignment. In the past few months, there seems to have been a subtle shift among AI safety researchers away from human-in-the-loop mechanistic interpretability and toward other things with model internals. This seems likely to continue, and I think it will get us much further if we don’t rely on humans in the loop. The one nice thing about deceptive and other insidiously inner-misaligned systems is that even though the inputs that trigger their bad behavior will be hard to find, there will likely be a good amount of inner neural circuitry dedicated to them making the treacherous turn. While it will be almost impossible to factor RSA-2048 during a training simulation to see what a potentially insidiously misaligned model does, it will probably be much easier to give it synthetic hallucinations! Later, EIS XI will discuss how latent adversarial training (Jermyn, 2022) (a form of relaxed adversarial training (Hubinger, 2019) ) seems to be a compelling strategy for fixing insidious misalignment and something we should start working on in large language models yesterday! Questions Would you argue that mechanistic interpretability is either necessary or sufficient for fixing insidious misalignment? Do you think interpretability or non-interpretability methods are more promising for fixing insidious misalignment?
2e61924a-d21d-4073-8496-461efd57cb78
trentmkelly/LessWrong-43k
LessWrong
Double Tongue Whistling I can whistle about seven notes per second, which corresponds to a reel at 105bpm. [1] While this isn't a problem for whistling basslines it's slightly too slow for melodies at contra dance speed (~110-122bpm). I want to figure out how to whistle faster, and I know there are people who whistle faster, but I don't know how it's usually done. I see two main routes: * Do what I currently do, but faster. * Figure out how to do something else. The former doesn't seem very promising: I've been playing around with whistling for decades and I suspect I'm pretty close to a local maximum with my current approach. On the other hand, I'm just trying to get from seven notes per second to eight, which seems like it might be possible? The latter is pretty open, and probably involves doing something that's slower at first but will eventually be faster. The main problem is, how do I know that after I put in all that effort I'll actually end up with something faster? Ideally there would be people demonstrating on youtube or something, with "here's how to whistle quickly" videos, but I'm not seeing that. Still, it seems like some sort of double-tonguing should work. Normally when I whistle I mark the notes with my glottis, the same as the two glottal stops in "uh-oh". A different option, though, would be to mark the notes by making a velar closure with my tongue, the same as the two velar stops in "cook" ("k"). And then I could alternate between them, which seems like it should let me get up to twice the speed of the slower one. Here's what the three sound like: I find velar stops harder, partly because it's not what I'm used to, and partly because I'm already using my tongue to form the whistle. Currently I can them 4-5 times per second. When alternating I can do a little better, 5-6 times per second, but that's still less than the 8-10 you'd expect from doubling my velar-only speed. I can go te-ke-te-ke with closures 10-11 times per second, so I am optimistic. Has anyon
4b62784e-5517-4db0-b0c8-c58ce8faee2c
trentmkelly/LessWrong-43k
LessWrong
Meetup : Stockholm meetup Discussion article for the meetup : Stockholm meetup WHEN: 09 October 2011 02:00:00PM (+0200) WHERE: Café Tabac (Stora Nygatan 46, Gamla Stan, Stockholm, Sweden) Where there are nerds there are Swedes, and LessWrong is no exception. Let's meet up! The first event, as normal, will be more about introducing ourselves, sharing interests, and generally having a good time. We will then decide what would interest us further (rationalist book club? Regular discussion of LW material? Scientific/math/CS/language subjects? Social/political activism? etc.) If you are interested but cannot attend this meetup for whatever reason, leave a comment! We want to know you're out there! (Also, the date is flexible) Disclaimer: I don't live in Stockholm so I don't have much experience with its venues in the daytime. I tried to pick something in a central location and without terrible reviews, but locals should feel free to recommend alternatives. In particular, if you know a library that has a café or meeting room, I believe that would be ideal; biblioteket.se is rather unhelpful to that end. Discussion article for the meetup : Stockholm meetup
7cc30969-5287-4f37-a4d5-9e8ac00ea221
trentmkelly/LessWrong-43k
LessWrong
Causal graphs and counterfactuals A problem that's come up with my definitions of stratification. Consider a very simple causal graph: . In this setting, A and B are both booleans, and A=B with 75% probability (independently about whether A=0 or A=1). Suppose I now want to compute the counterfactual: suppose I assume that B=0 when A=0. What would happen if A=1 instead? The problem is that P(B|A) seems insufficient to solve this. Let's imagine the process that outputs B as a probabilistic mix of functions, that takes the value of A and outputs that of B. There are four natural functions here: * f0(x)=0 * f1(x)=1 * f2(x)=x * f3(x)=1−x Then one way of modelling the causal graph is as a mix 0.75f2+0.25f3. In that case, knowing that B=0 when A=0 implies that P(f2)=1, so if A=1, we know that B=1. But we could instead model the causal graph as 0.5f2+0.25f1+0.25f1. In that case, knowing that B=0 when A=0 implies that P(f2)=2/3 and P(f0)=1/3. So if A=1, B=1 with probability 2/3 and B=1 with probability 1/3. And we can design the node B, physically, to be one or another of the two distributions over functions or anything in between (the general formula is (0.5+x)f2+x(f3)+(0.25−x)f1+(0.25−x)f0 for 0≤x≤0.25). But it seems that the causal graph does not capture that. Owain Evans has said that Pearl has papers covering these kinds of situations, but I haven't been able to find them. Does anyone know any publications on the subject?
58f0d5c3-2f80-4054-bed2-14c7770e3849
trentmkelly/LessWrong-43k
LessWrong
Main vs. Discussion Posts on this forum are now divided into two categories: Main and Discussion. Both are shown on the front page, but only Main posts have a preview; Discussion posts are shown only by title. The rationale for making this change is that I would like this forum to provide a place for doing research as well as a place for presenting thought-out research to an interested audience, but I don't want the former to interfere with the latter. Thus, Discussion is intended for things like going through the details of a technical proof, or posting a quickly-written argument that leaves many steps to be filled in by the reader; Main intended is for things that are interesting and easy-to-read to a typical person who has been involved in discussions about these topics for a while. (Partially inspired by a comment of Vladimir's.)
34f8fb4f-804f-488e-b951-09e7bc41d047
trentmkelly/LessWrong-43k
LessWrong
Reflections on Zen and the Art of Motorcycle Maintenance I read Zen and the Art of Motorcycle Maintenance by Robert Pirsig twice in 2019. I wrote reflections on many of the chapters as I went through the second time. I posted them to a Goodreads book group at the time, but I've decided to collect them here as well (unedited) because I keep referring to them and I'm tired of saying "but you may need a Goodreads account". They're chapter-by-chapter (though I did not have any notes for some chapters), so I don't know that they'll make any sense if you've never read the book. There's nothing like a thesis here, and it's not a distillation. These are literally my journal entries. I don't know what you're likely to get out of this if you read it, but I think Zen and Motorcycles is important, and there are a lot of people who seem to like it when I think things and then say which things I thought. That's what this is. Chapter 1 I like how vividly Pirsig evokes the anti-technology feels in me. I'm very sympathetic with John and Sylvia at the end of this chapter. I'm strongly reminded of my recent visit to the San Francisco Museum of Modern Art. I did not have type-1 fun while I was there. In fact I became quite upset, and when I left I had the thought "burn that place to the ground". I think it's because of what so many artists were doing in the second half of the 20th century. If the SF MOMA collection is representative, I think modern artists have been grappling with how to reconcile technology with human brains, mainly by demonstrating that doing so isn't automatic. I tried to pick out the least unpleasant pieces there to spend time with, but even given that selection criterion, two out of three of the pieces I really sat with made me feel isolated, disoriented, jarred, betrayed, and in contact with something horribly alien. I've wondered about why I responded that way to so much of the art there. The heart of it seems to be that it's all "unnatural" and "inhuman". Popular elements include sharp angles, straight lines, s
d0addfbc-9569-4943-b421-68c6ab9423fe
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL 6.1 Post summary / Table of contents ==================================== *Part of the*[*“Intro to brain-like-AGI safety” post series*](https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8)*.* Thus far in the series, [Post #1](https://www.alignmentforum.org/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why) set out some definitions and motivations (what is “brain-like AGI safety” and why should we care?), and Posts [#2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in) & [#3](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and) split the brain into a Learning Subsystem (telencephalon and cerebellum) that “learns from scratch” using learning algorithms, and a Steering Subsystem (hypothalamus and brainstem) that is mostly genetically-hardwired and executes innate species-specific instincts and reactions. Then in [Post #4](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor), I talked about the “short-term predictor”, a circuit which learns, via supervised learning, to predict a signal in advance of its arrival, but only by perhaps a fraction of a second. [Post #5](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and) then argued that if we form a closed loop involving *both* a set of short-term predictors in the Learning Subsystem *and* a corresponding set of hardwired circuits in the Steering Subsystem, we can get a “long-term predictor”. I noted that the “long-term predictor” circuit is closely related to [temporal difference (TD) learning](https://en.wikipedia.org/wiki/Temporal_difference_learning). Now in this post, we fill in the last ingredients—roughly the “actor” part of [actor-critic reinforcement learning](https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/) (RL)—to get a whole big picture of motivation and decision-making in the human brain. (I’m saying “human brain” to be specific, but it would be a similar story in any other mammal, and to a lesser extent in any vertebrate.) The reason I care about motivation and decision-making is that if we eventually build brain-like AGIs (cf. [Post #1](https://www.alignmentforum.org/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why)), we’ll want to build them so that they have some motivations (e.g. being helpful) and not others (e.g. escaping human control and self-reproducing around the internet). Much more on that topic in later posts. *Teaser for upcoming posts:* The next post ([#7](https://www.alignmentforum.org/posts/zXibERtEWpKuG5XAC/intro-to-brain-like-agi-safety-7-from-hardcoded-drives-to)) will walk through a concrete example of the model in this post, where we can watch an innate drive lead to the formation of an explicit goal, and adoption and execution of a plan to accomplish it. Then starting in [Post #8](https://www.alignmentforum.org/posts/fDPsYdDtkzhBp9A8D/intro-to-brain-like-agi-safety-8-takeaways-from-neuro-1-2-on) we’ll switch gears, and from then on you can expect substantially *less* discussion of neuroscience and *more* discussion of AGI safety (with the exception of one more neuroscience post towards the end). Unless otherwise mentioned, everything in this post is “things that I believe right now”, as opposed to neuroscience consensus. *(Pro tip: there is never a neuroscience consensus.)* Relatedly, I will make minimal effort to connect my hypotheses to others in the literature, but I’m happy to chat about that in the comments section or by [email](mailto:steven.byrnes@gmail.com). *Table of contents:* * In Section 6.2, I’ll present a big picture of motivation and decision-making in the human brain, and walk through how it works. The rest of the post will go through different parts of that picture in more detail. If you’re in a hurry, I suggest reading to the end of Section 6.2 and then quitting. * In Section 6.3, I’ll talk about the so-called “Thought Generator”, comprising (I think) the dorsolateral prefrontal cortex, sensory cortex, and other areas. (For ML readers familiar with “actor-critic model-based RL”, the Thought Generator is more-or-less a combination of the “actor” and the “model”.) I’ll talk about the inputs and outputs of this module, and briefly sketch how its algorithm relates to neuroanatomy. * In Section 6.4, I’ll talk about how values and rewards work in this picture, including the reward signal that drives learning and decision-making in the Thought Generator. * In Section 6.5, I’ll go into a bit more detail about how and why thinking and decision-making needs to involve not only simultaneous comparisons (i.e., a mechanism for generating different options in parallel and selecting the most promising one), but also sequential comparisons (i.e., thinking of something, then thinking of something else, and comparing those two thoughts). For example, you might think: “Hmm, I think I’ll go to the gym. Actually, what if I went to the café instead?” * In Section 6.6, I’ll comment on the common misconception that the Learning Subsystem is the home of ego-syntonic, internalized “deep desires”, whereas the Steering Subsystem is the home of ego-dystonic, externalized “primal urges”. I will advocate more generally against thinking of the two subsystems as two agents in competition; a better mental model is that the two subsystems are two interconnected gears in a single machine. 6.2 Big picture =============== Yes, this is literally a big picture, unless you’re reading on your cell phone. You saw a chunk of it in the [previous post (Section 5.4)](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and), but now there are a few more pieces. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028195/mirroredImages/qNZSBqLEh4qLRqgWW/agfx1snmxiprtyuyjlky.png)The big picture—The whole post will revolve around this diagram. Note that the bracketed neuroanatomy labels are a bit oversimplified.There’s a lot here, but don’t worry, I’ll walk through it bit by bit. 6.2.1 Relation to “two subsystems” ---------------------------------- Here’s how this diagram fits in with my “two subsystems” perspective, first discussed in [Post #3](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and): ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028196/mirroredImages/qNZSBqLEh4qLRqgWW/nzr2vs3mwj5kdnuhv2lg.png)Same as above, but the [two subsystems](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and) are highlighted in different colors.6.2.2 Quick run-through ----------------------- Before getting bogged down in details later in the post, I’ll just talk through the diagram: *1. Thought Generator generates a thought:* The Thought Generator settles on a “thought”, out of the high-dimensional space of *every thought you can possibly think* at that moment. Note that this space of possibilities, while vast, is constrained by current sensory input, past sensory input, and everything else in your learned world-model. For example, if you’re sitting at a desk in Boston, it’s generally not possible for you to *think* that you’re scuba-diving off the coast of Madagascar. But you *can* make a plan, or whistle a tune, or recall a memory, or reflect on the meaning of life, etc. *2. Thought Assessors distill the thought into a “scorecard”:* The Thought Assessors are a set of perhaps hundreds or thousands of “short-term predictor” circuits [(Post #4)](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor), which I discussed more specifically in [the previous post (#5)](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and). Each predictor is trained to predict a different signal from the Steering Subsystem. From the perspective of a Thought Assessor, everything in the Thought Generator (not just outputs but also latent variables) is [*context*](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor)—information that they can use to make better predictions. Thus, if I’m thinking the thought “I’m going to eat candy right now”, a thought-assessor can predict “high probability of tasting something sweet very soon”, based *purely on the thought*—it doesn’t *need* to rely on either external behavior or sensory inputs, although those can be relevant context too. *3. The “scorecard” solves the interface problem between a learned-from-scratch world model and genetically-hardwired circuitry:* Remember, the current thought and situation is an insanely complicated object in a high-dimensional learned-from-scratch space of “all possible thoughts you can think”. Yet we need the relatively simple, genetically-hardwired circuitry of the Steering Subsystem to analyze the current thought, including issuing a judgment of whether the thought is high-value or low-value (see Section 6.4 below), and whether the thought calls for cortisol release or goosebumps or pupil-dilation, etc. The “scorecard” solves that interfacing problem! It distills any possible thought / belief / plan / etc. into a genetically-standardized form that can be plugged directly into genetically-hardcoded circuitry. *4. The Steering Subsystem runs some genetically-hardwired algorithm:* Its inputs are (1) the scorecard from the previous step and (2) various other information sources—pain, metabolic status, etc., all coming from its own brainstem sensory-processing system (see [Post #3, Section 3.2.1](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and#3_2_1_Each_subsystem_generally_needs_its_own_sensory_processor)). Its outputs include emitting hormones, motor commands, etc., as well as sending the “ground truth” supervisory signals shown in the diagram.[[1]](#fn49k95mpuwpr)  *5. The Thought Generator keeps or discards thoughts based on whether the Steering Subsystem likes them:* More specifically, there’s a ground-truth value (a.k.a. reward, yes I know those don’t sound synonymous, see [Post #5, Section 5.3.1](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_3_1_Switch__i_e___value___expected_next_reward__vs_summation__i_e___value___expected_sum_of_future_rewards__)). When the value is very positive, the current thought gets “strengthened”, sticks around, and can start controlling behavior and summoning follow-up thoughts, whereas when the value is very negative, the current thought gets immediately discarded, and the Thought Generator summons a new thought instead. *6. Both the Thought Generator and the Thought Assessor* [*“learn from scratch”*](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in) *over the course of a lifetime, thanks in part to these supervisory signals from the Steering Subsystem.* Specifically, the Thought Assessors learn to make better and better predictions of their “ground truth in hindsight” signal (a form of Supervised Learning—see [Post #4](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor)), while the Thought Generator learns to disproportionately generate high-value thoughts. (The Thought Generator learning-from-scratch process *also* involves predictive learning of sensory inputs—[Post #4, Section 4.7](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor#4_7__Short_term_predictor__example__2__Predictive_learning_of_sensory_inputs_in_the_cortex).) 6.3 The “Thought Generator” =========================== 6.3.1 Overview -------------- Go back to the big-picture diagram at the top. At the top-left, we find the Thought Generator. In terms of actor-critic model-based RL, the Thought Generator is roughly a combination of “actor” + “model”, but not “critic”. (“Critic” was discussed in the previous post, and more on it below.) At our somewhat-oversimplified level of analysis, we can think of the “thoughts” generated by the Thought Generator as a combination of *constraints* (from predictive learning of sensory inputs) and *choices* (guided by reinforcement learning). In more detail: * *Constraints* on the Thought Generator  come from sensory input information, and ultimately from predictive learning of sensory inputs ([Post #4, Section 4.7](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor#4_7__Short_term_predictor__example__2__Predictive_learning_of_sensory_inputs_in_the_cortex)). For example, I *cannot*think the thought: *There is a cat on my desk and I’m looking at it right now.* There is no such cat, regrettably, and I can’t just will myself to see something that obviously isn’t there. I can *imagine* seeing it, but that’s not the same thought. * But *within*those constraints, there’s more than one possible thought my brain can think at any given time. It can call up a memory, it can ponder the meaning of life, it can zone out, it can issue a command to stand up, etc. I claim that these “choices” are decided by a reinforcement learning (RL) system. This RL system is one of the main topics of this post. 6.3.2 Thought Generator inputs ------------------------------ The Thought Generator has a number of inputs, including sensory inputs and hyperparameter-shifting neuromodulators. But the main one of interest for this post is [ground-truth value, a.k.a. reward](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_3_1_Switch__i_e___value___expected_next_reward__vs_summation__i_e___value___expected_sum_of_future_rewards__). I’ll talk about that in more detail later, but we can think of it as *an estimate of whether a thought is good or bad*, operationalized as “worth sticking with and pursuing” versus “deserving to be discarded so we can re-roll for a new thought”. This signal is important *both*for learning to think better thoughts in the future, *and* for thinking good thoughts right now: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028195/mirroredImages/qNZSBqLEh4qLRqgWW/u0qr1wh3qejxbicb89td.png)6.3.3 Thought Generator outputs ------------------------------- There are meanwhile a lot of signals going *out* of the Thought Generator. Some are what we intuitively think of as “outputs”—e.g., skeletal motor commands. Other outgoing signals are, well, a bit funny… Recall the idea of “context” from [Section 4.3 of Post #4](https://www.alignmentforum.org/posts/Y3bkJ59j4dciiLYyw/intro-to-brain-like-agi-safety-4-the-short-term-predictor#4_3_Terminology__Context__Output__Supervisor): The Thought Assessors are short-term predictors, and a short-term predictor can in principle grab any signal in the brain and leverage it to improve its ability to predict its target signal. So if the Thought Generator has a world-model, then *somewhere* in the world-model is a configuration of latent variable activations that encode the concept “baby kittens shivering in the cold rain”. We wouldn’t normally think of those as “output signals”—I just said in the last sentence that they’re latent variables! But as it happens, the “will lead to crying” Thought Assessor has grabbed a copy of those latent variables to use as context signals, and gradually learned through experience that these particular signals are strong predictors of me crying. Now, as an adult, these “baby kittens in the cold rain” neurons in my Thought Generator are living a double-life: * They are latent variables in my world-model—i.e., they and their web of connections will help me parse an image of baby kittens in the rain, if I see one, and to reason about what would happen to them, etc. * Activating these neurons, e.g. via imagination, is *a way for me to call up tears on command*. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028196/mirroredImages/qNZSBqLEh4qLRqgWW/ky1mxq7zyntyzjetadno.png)The Thought Generator (top left) has two types of outputs: the “traditional” outputs associated with voluntary behavior (green arrows) and the “funny” outputs wherein even latent variables in the model can directly impact involuntary behaviors (blue arrows).6.3.4 Thought Generator neuroanatomy sketch ------------------------------------------- AUTHOR’S NOTE: When I first published this blog post, this section contained a discussion and diagrams of cortico-basal ganglia-thalamo-cortical loops, but it was very speculative and turned out to be wrong in various ways. It’s not too relevant for the series anyway, so I’m deleting it. I’ll write a corrected version at some point. Sorry! Here’s the updated dopamine diagram from the previous post: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028195/mirroredImages/qNZSBqLEh4qLRqgWW/gctj0zd4idarbywiqip8.png)The “mesolimbic” dopamine signals on the right were discussed in the [previous post (Section 5.5.6)](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_5_6_It_offers_a_nice_explanation_for_the_diversity_of_dopamine_neuron_activity). The “mesocortical” dopamine signal on the left is new to this post. (I think there are *even more* dopamine signals in the brain, not shown here. They’re off-topic for this series, but see discussion [here](https://www.alignmentforum.org/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine).)There are many more implementation details inside the Thought Generator that I’m not discussing. However, this bare-bones section is more-or-less sufficient for my forthcoming posts on AGI safety. The gory details of the Thought Generator, like the gory details of almost everything else in the Learning Subsystem, are mainly helpful for *building* AGI. 6.4 Values and rewards ====================== 6.4.1 The cortex proposes a “value” estimate, but the Steering Subsystem may choose to override ----------------------------------------------------------------------------------------------- There are two “values” in the diagram (it looks like three, but the two red ones are the same): ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028195/mirroredImages/qNZSBqLEh4qLRqgWW/us3nmzlidqer6zjpi94e.png)Two types of “value” in my modelThe blue-circled signal is the value estimate from the corresponding Thought Assessor in the cortex. The red-circled signal (again, it’s one signal drawn twice) is the corresponding “ground truth” for what the value estimate should have been. (Recall that “ground-truth value” is a synonym for “reward”; yes I know that sounds wrong, see [previous post (Section 5.3.1)](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_3_1_Switch__i_e___value___expected_next_reward__vs_summation__i_e___value___expected_sum_of_future_rewards__) for discussion.) Just like the other “long-term predictors” discussed in [the previous post](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and), the Steering Subsystem can choose between “defer-to-predictor mode” and “override mode”. In the former, it sets the red equal to the blue, as if to say “OK, Thought Assessor, sure, I’ll take your word for it”. In the latter, it ignores the Thought Assessor’s proposal, and its own internal circuitry outputs some different value.[[2]](#fng0mylfbpkdp) **Why might the Steering Subsystem override the Thought Assessor’s value estimate?** Two factors: * First, the Steering Subsystem might be acting on information from *other* (non-value) Thought Assessors. For example, in the Dead Sea Salt Experiment (see [previous post, Section 5.5.5](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_5_5_It_explains_the_dead_sea_salt_experiment)), the value estimator says “bad things are going to happen”, but meanwhile the Steering Subsystem is getting an “I’m about to taste salt” prediction in the context of a state of salt-deprivation. So the Steering Subsystem says to itself “Whatever is happening now is very promising; the value estimator doesn’t know what it’s talking about!” * Second, the Steering Subsystem might be acting on its own information sources, independent of the Learning Subsystem. In particular, the Steering Subsystem has its own sensory-processing system (see [Post #3, Section 3.2.1](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and#3_2_1_Each_subsystem_generally_needs_its_own_sensory_processor)), which can sense biologically-relevant cues like pain status, hunger status, taste inputs, the sight of a slithering snake, the smell of a potential mate, and so on. All these things and more can be possible bases for overruling the Thought Assessor, i.e., setting the red-circled signal to a different value than the blue-circled one. Interestingly (and unlike in textbook RL), in the big picture, *the blue-circled signal doesn’t have a special role in the algorithm*, as compared to the other Thought Assessors. It’s just one of many inputs to the Steering Subsystem’s hardwired algorithm for deciding what to put into the red-circled signal. The blue-circled signal might be an *especially important* signal in practice, weighed more heavily than the others, but ultimately everything is in the same pot. In fact, my longtime readers will recall that last year I was writing posts that *omitted* the blue-circled value signal from the list of Thought Assessors! I now think that was a mistake, but I retain a bit of that same attitude. 6.5 Decisions involve not only simultaneous but also sequential comparisons of value ==================================================================================== Here’s a “simultaneous” model of decision-making, as described by [*The Hungry Brain* by Stephan Guyenet](https://www.amazon.com/Hungry-Brain-Outsmarting-Instincts-Overeat/dp/125008119X) in the context of studies on lamprey fish: > Each region of the pallium [= lamprey equivalent of cortex] sends a connection to a particular region of the striatum, which (via other parts of the basal ganglia) returns a connection back to the same starting location in the pallium. This means that each region of the pallium is reciprocally connected with the striatum via a specific loop that regulates a particular action…. For example, there’s a loop for tracking prey, a loop for fleeing predators, a loop for anchoring to a rock, and so on. Each region of the pallium is constantly whispering to the striatum to let it trigger its behavior, and the striatum always says “no!” by default. In the appropriate situation, the region’s whisper becomes a shout, and the striatum allows it to use the muscles to execute its action.  > > I endorse this as *part* of my model of decision-making, but only part of it. Specifically, this is one of the things that’s happening when the Thought Generator generates a thought. Indeed, my diagram in Section 6.3.4 above takes obvious inspiration from the model above. Different *simultaneous* possibilities are being compared. The other part of my model is comparisons of *sequential* thoughts. You think a thought, and then you think a different thought (possibly very different, or possibly a refinement of the first thought), and the two are implicitly compared (by the Steering Subsystem picking a ground-truth value based on the temporal dynamics of Thought Assessors jumping up and down, for example), and if the second thought is worse, it gets weakened such that a new thought can replace it (and the new thought might be the first thought re-establishing itself). I could cite experiments for the sequential-comparison aspect of decision-making (e.g. Figure 5 of [this paper](https://arxiv.org/abs/1912.07660), which is arguing the same point as I am), but do I really need to? Introspectively, it’s obvious! You think: “Hmm, I think I’ll go to the gym. Actually, what if I went to the café instead?” You’re imagining one thing, and then another thing. And I don’t think this is is a humans-vs-lampreys thing. My hunch is that comparisons of sequential thoughts is universal in vertebrates. As an illustration of what I mean: 6.5.1 Made-up example of what comparison-of-sequential-thoughts might look like in a simpler animal --------------------------------------------------------------------------------------------------- Imagine a simple, ancient, little fish swimming along, navigating to the cave where it lives. It gets to a ~~fork in the road,~~ ummm, “fork in the kelp forest”? Its current navigation plan involves continuing left to its cave, but it also has the option of turning right to go to the reef, where it often forages. Seeing this path to the right, I claim that its navigation algorithm reflexively loads up a plan: “I’m will turn right and go to the reef.” Immediately, this new plan is evaluated and compared to the old plan. If the new plan seems worse than the old plan, then the new thought gets shut down, and the old thought (“I’m going to my cave”) promptly reestablishes itself. The fish continues to its cave, as originally planned, without skipping a beat. Whereas if instead the new plan seems *better* than the old plan, then the new plan gets strengthened, sticks around, and orchestrates motor commands. And thus the fish turns to the right and goes to the reef instead. (In reality, I don’t know much about little ancient fish, but rats at a fork in the ~~road~~ maze are known to imagine both possible navigation plans in succession, based on measurements of hippocampus neurons—[ref](https://doi.org/10.1523/JNEUROSCI.3761-07.2007).) 6.5.2 Comparison-of-sequential-thoughts: why it’s necessary ----------------------------------------------------------- In my view, thoughts are complicated. To think the thought “I will go to the café”, you’re not just activating some tiny cluster of dedicated go-to-the-café neurons. Instead, it’s a distributed pattern involving practically every part of the cortex. You can’t simultaneously think “I will go to the café” and “I will go to the gym”, because they would involve different activity patterns of the same pools of neurons. They would cross-talk. Thus, the only possibility is thinking the thoughts in sequence. As a concrete example of what I have in mind, think of how a [Hopfield network](https://en.wikipedia.org/wiki/Hopfield_network) can’t recall twelve different memories simultaneously. It has multiple stable states, but you can only explore them sequentially, one after the other. Or think about grid cells and place cells, etc. 6.5.3 Comparison-of-sequential-thoughts: how it might have evolved ------------------------------------------------------------------ From an evolutionary perspective, I imagine that comparison-of-sequential-thoughts is a distant descendent of a very simple mechanism akin to the [run-and-tumble mechanism in swimming bacteria](https://en.wikipedia.org/wiki/Run-and-tumble_motion). In the run-and-tumble mechanism, a bacterium swims in a straight line (“runs”), and periodically changes to a new random direction (“tumbles”). But the trick is: when the bacterium’s situation / environment is getting *better*, it tumbles *less* frequently, and when it’s getting *worse*, it tumbles *more* frequently. Thus, it winds up moving in a good direction (on average, over time). ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028196/mirroredImages/qNZSBqLEh4qLRqgWW/hqslvkijczqphak6lxre.png)Starting with a simple mechanism like that, one can imagine adding progressively more bells and whistles. The palette of behavioral options can get more and more complex, eventually culminating in “every thought you can possibly think”. The methods of evaluating whether the current plan is good or bad can get faster and more accurate, eventually involving learning-algorithm-based predictors as in [the previous post](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and). The new behavioral options to tumble *into* can be picked via clever learning algorithms, rather than randomly. Thus, it seems to me that there’s a smooth path all the way from something-akin-to-run-and-tumble to the intricate, finely-tuned, human brain system that I’m talking about in this series. (Other musings on run-and-tumble versus human motivation: [1](https://www.biorxiv.org/content/10.1101/2021.04.15.439955v2), [2](https://nathanieltravis.com/2022/01/17/is-human-behavior-just-elaborate-running-and-tumbling/).) 6.6 Common misconceptions ========================= 6.6.1 The distinction between internalized ego-syntonic desires and externalized ego-dystonic urges is unrelated to Learning Subsystem vs. Steering Subsystem ------------------------------------------------------------------------------------------------------------------------------------------------------------- *(See also: my post*[*(Brainstem, Neocortex) ≠ (Base Motivations, Honorable Motivations)*](https://www.lesswrong.com/posts/cTSLLzK4MwiCyWtwt/brainstem-neocortex-base-motivations-honorable-motivations)*.)* Many people (including me) have a strong intuitive distinction between [ego-syntonic drives](https://en.wikipedia.org/wiki/Egosyntonic_and_egodystonic) that are “part of us” or “what we want”, versus [ego-dystonic drives](https://en.wikipedia.org/wiki/Egosyntonic_and_egodystonic) that feel like urges which intrude upon us from the outside. For example, a food snob might say "I love fine chocolate", while a dieter might say "I have an urge to eat fine chocolate". ### 6.6.1.1 The explanation I like I would claim that these two people are basically describing the same feeling, with essentially the same neuroanatomical locations and essentially the same relation to low-level brain algorithms. But the food snob is *owning* that feeling, and the dieter is *externalizing* that feeling. These two different self-concepts go hand-in-hand with two different “higher-order preferences”: the food snob *wants to want to eat fine chocolate*while the dieter *wants to not want to eat fine chocolate*. This leads us to a straightforward psychological explanation for why the food snob and dieter conceptualize their feelings differently: * The food snob finds it *appealing* to think of “the desire I feel for fine chocolate” as “part of who I am”. So he does. * The dieter finds it *aversive* to think of “the desire I feel for fine chocolate” as “part of who I am”. So he doesn’t. ### 6.6.1.2 The explanation I don’t like Many people (including Jeff Hawkins, see [Post #3](https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and)) notice the distinction described above, and separately, they endorse the idea (as I do) that the brain has a Learning Subsystem and Steering Subsystem (again see [Post #3](https://www.lesswrong.com/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and)). They naturally suppose that these are the same thing, with “me and my deep desires” corresponding to the Learning Subsystem, and “urges that I don’t identify with” corresponding to the Steering Subsystem.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673028196/mirroredImages/qNZSBqLEh4qLRqgWW/e3fmoxiwqo0acdykqwgn.png)Most people I talk to, including me, have separate concepts in our learned world-models for “me” and “my urges”. I claim that these concepts did *NOT* come out of veridical introspective access to our own neuroanatomy. And in particular, they do *not*correspond respectively to the Learning & Steering Subsystems.I think this model is wrong. At the *very* least, if you want to endorse this model, then you need to reject approximately everything I’ve written in this and my previous four posts. In *my*story, if you’re trying to abstain from chocolate, but also feel an urge to eat chocolate, then: * You have an urge to eat chocolate because the Steering Subsystem approves of the thought “I am going to eat chocolate right now”; AND * You’re trying to abstain from chocolate because the Steering Subsystem approves of the thought “I am abstaining from chocolate”. (Why would the Steering Subsystem approve of the latter? It depends on the individual, but it’s probably a safe bet that social instincts are involved. I’ll talk more about social instincts in [Post #13](https://www.alignmentforum.org/posts/5F5Tz3u6kJbTNMqsb/intro-to-brain-like-agi-safety-13-symbol-grounding-and-human). If you want an example with less complicated baggage, imagine a lactose-intolerant person trying to resist the urge to eat yummy ice cream right now, because it will make them feel really sick later on. The Steering Subsystem likes plans that result in not feeling sick, and *also* likes plans that result in eating yummy ice cream.) 6.6.2 The Learning Subsystem and Steering Subsystem are not two agents ---------------------------------------------------------------------- Relatedly, another frequent error is treating either the Learning Subsystem or Steering Subsystem by itself as a kind of independent agent. This is wrong on both sides: * The Learning Subsystem cannot think any thoughts unless the Steering Subsystem has endorsed those thoughts as being worthy of being thunk. * Meanwhile, the Steering Subsystem does not understand the world, or itself. It has no explicit goals for the future. It’s just a relatively simple, hardcoded input-output machine. As an example, the following is *entirely possible*: 1. The Learning Subsystem generates the thought *“I am going to surgically alter my own Steering Subsystem”*. 2. The Thought Assessors distill that thought down to the “scorecard”. 3. The Steering Subsystem gets the scorecard and runs it through its hardcoded heuristics, and the result is: “Very good thought, go right ahead and do it!” Why not, right? I’ll talk more about that example in later posts. If you just read the above example, and you’re thinking to yourself “Ah! This is a case where the Learning Subsystem has outwitted the Steering Subsystem”, then *you’re still not getting it*. (Maybe instead try imagining the Learning Subsystem & Steering Subsystem as two interconnected gears in a single machine.) 1. **[^](#fnref49k95mpuwpr)** As in [the previous post](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and), the term “ground truth” here is a bit misleading, because sometimes the Steering Subsystem will just *defer* to the Thought Assessors. 2. **[^](#fnrefg0mylfbpkdp)**As in the [previous post](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and), I don’t *really* believe there is a pure dichotomy between “defer-to-predictor mode” and “override mode”. In reality, I’d bet that the Steering Subsystem can partly-but-not-entirely defer to the Thought Assessor, e.g. by taking a weighted average between the Thought Assessor and some other independent calculation.
3a190cab-0705-410e-b713-c506bff6101a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Introducing AI Alignment Inc., a California public benefit corporation... We are a new nonprofit grassroots organization getting started in the space at https://alignmentproblem.org with a focus on the defense industry and the nascent psychotherapeutic approach to LLM fine-tuning. As a fun way to get started on this forum, here is what Bing got out of it... ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Kw8vaTc7FwaLZuyi4/osknthvfjde3dzuguy11)
d31151e4-a654-4586-83c5-810b200831b4
StampyAI/alignment-research-dataset/blogs
Blogs
Index of Transcripts Volunteers at [MIRI Volunteers](http://mirivolunteers.org/) and elsewhere have helpfully transcribed several audio/video recordings related to MIRI’s work. This post is a continuously updated index of those transcripts. All transcripts of Singularity Summit talks are available [here](http://intelligence.org/singularitysummit/). Other available transcripts include: * *Philosophy Talk*: [Turbo-Charging the Mind](https://docs.google.com/document/d/13EYHVI9KBte28-TMn3PN4QyT_8aZtnQs8AGFBH20Qp0/pub "Transcript") (with Anna Salamon) * *BloggingHeads.tv*: [Eliezer Yudkowsky and Scott Aaronson on superintelligence and many-worlds](https://docs.google.com/document/d/1JIqzTGNvdLukR0Ce5T2eiv_vxtPO53N3KGCbxVvREtU/pub) * *BloggingHeads.tv*: [Eliezer Yudkowsky and Massimo Pigliucci on consciousness and uploading](https://docs.google.com/document/d/1S-7CWOLOtLRDmMiS7LtVxELssUi3OI1-UcrPAzGMuH4/pub) * *AGI 2011:* [Whole Brain Emulation, as a platform for creating safe AGI](https://docs.google.com/document/d/1-2A_cHiFC8fmeWHdQBeM7ynWaFdbklPm9u-UtKxZJ0A/pub) (with Anna Salamon) * *AGI 2011:* [Risk-averse preferences as an AGI safety technique](https://docs.google.com/document/d/1HF0aK2-nyulheAYpOyZ1Xat-PzZPpjt5BCmzChUCVKc/pub) (with Anna Salamon) * *Dawn or Doom*: [Why ain’t you rich?](https://docs.google.com/document/d/1dRUHMKRLNngor-e-NNJD9NtDqGI4QAzQqfty1EjjIpA/edit?usp=sharing) (with Nate Soares) The post [Index of Transcripts](https://intelligence.org/2013/07/25/index-of-transcripts/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
a66488c9-34ec-46ec-b077-6812d060471c
trentmkelly/LessWrong-43k
LessWrong
The Case for The EA Hotel Epistemic Status: I strongly believe all the things I’m writing here. These are mostly heuristics and mental models rather than hard data, which I think is necessary for a project so young. I’m trying to make a strong case for the EA hotel, not a balanced one (although it will probably be balanced by the $100 on the line for articles taking the opposite view). The EA Chasm There’s something broken about the pipeline for both talent and projects in the EA Community. There’s a space in which there’s a lot of talented people in EA who want to do good, and there’s a lot of people with ideas about projects that could do good. Finally, projects like Charity Entrepreneurship seem to indicate that there’s no shortage of ways to do good. What’s missing is a way to go from a talented EA—with no evidence behind your project, no previous projects under your belt, and little status within the EA community—to someone who has enough capital to prove that their project has merit. This gap exists for a number of reasons, including strong risk aversion in the EA community, a lack of diversity in grant decision making processes, and a lack of manpower to vet hundreds of projects for the small amount of money they would need to prove themselves enough to move up to the “projects with strong evidence” category. A number of solutions have also been proposed to fill in this gap, including an EA projects evaluation platform and a suggestion for EAs to work on Non-EA projects in order to get a good track record and higher status (and thus be able to be hired or get grants). However, both of these suggestions miss out on one of the big reasons the chasm needs to be filled—strong vetting is nice, but there’s no replacement for simply trying many things and seeing what works. Why The Chasm Matters This Chasm is a big deal for the community. Organizations like CEA can work to guide the community towards a better future, and organizations like Charity Entrepreneurship can slowly work to all
c8a6a4a1-c3e8-459a-9751-36f77b9fad22
trentmkelly/LessWrong-43k
LessWrong
LW 2.0 Open Beta Live The LW 2.0 Open Beta is now live; this means you can create an account, start reading and posting, and tell us what you think. Four points: 1) In case you're just tuning in, I took up the mantle of revitalizing LW through improving its codebase some time ago, and only made small amounts of progress until Oliver Habryka joined the project and put full-time engineering effort into it. He deserves the credit for the new design, and you can read about his strategic approach here. 2) If you want to use your current LW account on LW2.0, we didn't import the old passwords, and so you'll have to use the reset password functionality. If your LW account isn't tied to a current email, send a PM to habryka on lesswrong and he'll update the user account details on lesserwrong. He's also working on improving the site and sleeping and things like that, so don't expect an immediate response. 3) During the open beta there will be a green message in the bottom right hand corner of the screen. This is called Intercom, and is how you can tell us about issues with the site and ask other questions. 4) The open beta will end with a vote of users with over a thousand karma on whether we should switch the lesswrong.com URL to point to the new code and database. If this succeeds, all the activity from the open beta and the live site will be merged together. If the vote fails, we expect to archive LW until another team comes along to revive it. We currently don't have a date set, but this will be announced a week in advance.
9e9384f7-6a35-4b11-9bc7-dd9480949094
trentmkelly/LessWrong-43k
LessWrong
Subversion Strategy Eval: Can language models statelessly strategize to subvert control protocols? We recently released Subversion Strategy Eval: Can language models statelessly strategize to subvert control protocols?, a major update to our previous paper/blogpost, evaluating a broader range of models (e.g. helpful-only Claude 3.5 Sonnet) in more diverse and realistic settings (e.g. untrusted monitoring). Abstract An AI control protocol is a plan for usefully deploying AI systems that prevents an AI from intentionally causing some unacceptable outcome. Previous work evaluated protocols by subverting them using an AI following a human-written strategy. This paper investigates how well AI systems can generate and act on their own strategies for subverting control protocols whilst operating statelessly (i.e., without shared memory between contexts). To do this, an AI system may need to reliably generate effective strategies in each context, take actions with well-calibrated probabilities, and coordinate plans with other instances of itself without communicating.  We develop Subversion Strategy Eval, a suite of eight environments, covering a range of protocols and strategic capabilities, and six sets of affordances that help isolate individual capabilities. We implement the evaluation in Inspect-AI and release it open-source. We evaluate Claude 3.5 models, including helpful-only versions, as well as OpenAI reasoning models. None of the models demonstrate substantial capability in strategizing to subvert control protocols statelessly. However, providing models with additional affordances, such as the ability to share a plan between contexts, can substantially improve performance.  We hope our evaluations can act as a leading indicator for when models are capable of subverting control protocols and also relax the worst-case assumption of perfect strategic ability in AI control evaluations. Paper overview   Previous control work has evaluated control protocols by stress-testing them using LLM agents executing actions based on strategies devised by a human red