id: string (length 36)
source: string (15 classes)
formatted_source: string (13 classes)
text: string (2 to 7.55M characters)
556d5698-8ad1-4d5a-a999-80d5d9e70bbd
trentmkelly/LessWrong-43k
LessWrong
Normie response to Normie AI Safety Skepticism

The following post is an adaptation of a comment I posted on the Machine Learning subreddit in response to a post asking whether readers worried about AI Safety/Alignment the way Eliezer Yudkowsky (EY) does. My general impression is that most commenters had not dedicated much thought to AI Safety (AIS), like most normal people. I personally wouldn't qualify myself as particularly well-read on the topic either, hence the title. As I mention towards the end of the article, I think there is "alpha" in discussions happening between us normies of the "out-group", and so I went ahead and posted that comment and this post for amplification.

It seems like many commenters mostly have issues with the "timelines", i.e., they do not think these risks will happen soon enough for us to worry about them now. I think this is a fair enough stance; there is a lot of uncertainty around forecasting the future of A(G)I developments. However, I would point out that they should consider that we may all be suffering from exponential growth bias. If indeed AGI is very far away, then great: we have more time to develop it safely (although I'd be sad to get to live in such a cool future later rather than sooner). But there is a real chance it isn't, and I do think we need to be cautious about our confidence in the former case, considering the gravity of the consequences superintelligence may bring. Basically what I'm saying is "better safe than sorry", just as we shouldn't dismiss nuclear safety or epidemiology, and shouldn't have dismissed climate alarmists in the 20th century.

The other skepticism I tend to see is that there's no point working on aligning AI, because even with aligned AI, it will be abused by corporations and governments, just like our narrower systems are being abused[1] today. I should point out that the field of AI Safety also considers this problem and posits that solutions to it are also necessary to address safety, similarly to how we coordinate for other se
90023726-dc6a-42b6-9ccb-3c547c458557
trentmkelly/LessWrong-43k
LessWrong
Blood Is Thicker Than Water 🐬

Followup to: Where to Draw the Boundaries?

Without denying the obvious similarities that motivated the initial categorization {salmon, guppies, sharks, dolphins, trout, ...}, there is more structure in the world: to maximize the probability your world-model assigns to your observations of dolphins, you need to take into consideration the many aspects of reality in which the grouping {monkeys, squirrels, dolphins, horses ...} makes more sense. The old category might have been "good enough" for the purposes of the sailors of yore, but as humanity has learned more, as our model of Thingspace has expanded with more dimensions and more details, we can see the ways in which the original map failed to carve reality at the joints ...

So the one comes to you—a-gain—and says:

> Hold on. In what sense did the original map fail to carve reality at the joints? You don't deny the obvious similarities between dolphins and fish—between dolphins and other fish. That's a cluster in configuration space! The observation that dolphins are evolutionarily related to mammals may be an interesting fact that specialized professional evolutionary biologists care about for some inscrutable specialist reason. But I'm not a professional biologist. Choosing to define categories around evolutionary relatedness rather than macroscopic human-relevant features seems like an arbitrary æsthetic whim. Why should I care about phylogenetics, at all?

This one is going to take a few paragraphs. Focusing on evolutionary relatedness is not an arbitrary æsthetic whim because evolution actually happened. Evolution isn't just a story that our Society's specialists happen to have chosen because they liked it; they chose it because it predicts what we see in the world. You can't choose a substantively different theory and make the same predictions about the real world. (At most, you'd end up with an isomorphic theory with additional epiphenomenal elements, asserting that an allele rose in frequency "because"
a0dffbaa-26a9-4f33-b17a-1d402ecd48bf
trentmkelly/LessWrong-43k
LessWrong
What should I teach to my future daughter? Soon I will have a baby girl, and I was thinking that the world is changing very fast; I want to be able to give her some foundations for learning to deal with this civilisation, from a practical point of view. I was wondering if I should teach her some programming language. What skills will she need for what is coming? Of course, I will also do my best in terms of teaching her ethics, compassion and other values, but I am talking about the hands-on skills that the new generations need to be able to develop.
0dd49838-15ed-4062-afb1-ad7c4bb7cea7
trentmkelly/LessWrong-43k
LessWrong
[To Be Revised] Perhaps the Meaning of Life: An Adventure in Pluralistic Morality

Sept. 18, 2002

"All models are wrong but some are useful." --George Box

At the core of every controversial issue, institutional dysfunction and inspiring human achievement is the conspicuous influence of how well a society of people "get along". Splitting this atom exposes the sub-particles of human nature (~proton), moral agency (~electron) and the electromagnetic tension of choice around individual belief (~neutron). The following model is an ongoing effort to assemble and refine an understanding of this dynamic.

The first figure, entitled Moral Transitions, indicates the individual's struggle against the less appealing aspects of human nature. The second, Relational Harmonics, represents the near-constant challenge of building and maintaining character. The third, Personal Growth, indicates the not necessarily linear stages that can occur over a lifetime.

[Three figures: Moral Transitions, Relational Harmonics, Personal Growth]

Footnotes

a. Personal Growth (intellectual, moral, spiritual): "There are of course many gradations within and between the four stages of spiritual development. [emphasis added] ... My experience suggests that this progression of spiritual development holds true in all cultures and for all religions. ... Again in my experience, the four stages of spiritual development also represent a paradigm for healthy psychological development" --M. Scott Peck, The Different Drum, Chapter IX, The Stages Of Spiritual Growth, 1987 [A flawed but useful book]

b. "The world judges many things; for it is the ignorance of its natural state that forms the true tribunal of man. Knowledge has two extremes which meet; the one is that pure natural ignorance, in which all are born; the other is that which is experienced by minds of the highest order. After traversing the whole circl
242e4355-fbbf-447e-a98f-24992462fc6e
trentmkelly/LessWrong-43k
LessWrong
The Limits of Curiosity

In principle, I agree with the notion that it is unforgivable to not want to know, and not want to improve your map to match the territory. However, even the most curious person in the world cannot maintain equal curiosity about all things, and even if they could there are limits on time and energy. In general, the things that inspire curiosity are determined by your personal likes, dislikes, and biases, and it is therefore worth considering carefully where these demarcations fall so as not to deprive ourselves of useful information. This is particularly important when it comes to things that inspire not just lack of interest, but aversion, or "anti-curiosity."

However, not all information is useful, and it can be useful to encourage a bias that cuts you off from information that is not particularly useful to you, so as to better allocate your time and energy. It is possible that it could also be useful to fabricate an "I don't want to know" stance about a certain type of information so as to better allocate your time (for example, ceasing to watch television, and denying curiosity about what is happening on your favorite shows), but I will not discuss or advocate that here, largely because it's all I can do to hold the line against new time wasters.

The difficulty and danger of this method is that it is best accomplished by not thinking about the things you don't want to be curious about, and that can lead to not even realizing you aren't curious about them, so important things may slip through the cracks. For example, I have never smoked a cigarette, and it requires no effort on my part to not be curious about what it is like. That is such a deeply buried aversion that I might never have consciously noticed that lack of curiosity if I had not been writing this article. In this case, lack of curiosity about smoking is beneficial, but it could just as easily have been something that would be useful for me to be curious about, and I might never have noticed
720b23af-285e-470b-bd8a-a662215e98ed
trentmkelly/LessWrong-43k
LessWrong
Open Thread - July 2023 If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post. If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section. The Open Thread tag is here. The Open Thread sequence is here.
9f2207e0-0b13-4b93-ab74-4e1e395995c2
trentmkelly/LessWrong-43k
LessWrong
Being useless to show effectiveness

From an email discussion (lightly edited):

> I actually think an important related dynamic, in the world at large more than EA, is people favoring actions that are verifiably useless to themselves over ones that are probably useful to others but also maybe useful to themselves. I blogged about this here a while ago. In short, I see this as a signaling problem. The undesirable action (destroying resources in an evidently useless way) is intended to signal that you are not bad. Bad people (greedy exploiters trying to steal everyone else's stuff) can make themselves look just like effective good people (both do things that look high leverage and where it is not totally clear what the levers are ultimately pushing). So the bad people do that, because it beats looking bad. Then there is no signal that the effective good people can send to distinguish themselves from the bad people. So people who want to not look bad have to look ineffective instead.
>
> A way something like this might happen in our vicinity is, e.g., if I genuinely guess that the most effective thing to do might be for me to buy a delicious drink and then sit still in a comfy place for the day and think about human coordination in the abstract. However this is much like what a selfish version of me might do. So if I want to not humiliate myself by seeming like a cheating free-rider liar motivated reasoner in front of the other EAs, or perhaps if I just experience too much doubt about my own motives, or even if I just want to make it straightforward for others around to know they can trust me, perhaps I should instead work for a reputable EA org or earn money in an annoying way and give it to someone far away from me.
>
> On this model, the situation would be improved by a way to demonstrate that one is effective-good rather than effective-evil. (As in, a second sense in which it is a signaling problem is that adding a good way to signal would make it better.)
1810b754-bdec-48a8-b983-a0fb6861de16
trentmkelly/LessWrong-43k
LessWrong
Tarski Statements as Rationalist Exercise

Related to: Dissolving the Question, The Second Law of Thermodynamics, and Engines of Cognition, The Meditation on Curiosity.

> The sentence "snow is white" is true if, and only if, snow is white. -- A. Tarski

Several days ago I spent a couple of hours trying to teach my 15-year-old brother how to properly construct Tarski statements. It's quite nontrivial to get right. Learning to place facts and representations in separate mental buckets is one of the fundamental tools for a rationalist. In our model of the world, information propagates from object to object, from mind to mind. To ascertain the validity of your belief, you need to research the whole network of factors that led you to attain the belief. The simplest relation is between a fact and its representation, idealized to represent correctness or incorrectness only, without yet worrying about probabilities. The same object or the same property can be interpreted to mean different things in different relations and contexts, indicating the truth of one statement or another, and it's important not to conflate those.

Let's say you are watching the news on TV and the next item is an interview with a sasquatch. The sasquatch answers the questions about his family in decent English, with a slight British accent. What do you actually observe, and how should you interpret the data? Did you "see a sasquatch"? Did you learn the facts about the sasquatch's family? Is there a fact of the matter as to whether the sasquatch's daughter is 5 years old, as opposed to 4 or 6? The meaningfulness of these questions is conditional on their context, like in the notorious "when did you stop beating your wife?".

These examples seem unnaturally convoluted, but in fact every statement suffers from the same problem: you must cross the levels of indirection and not lose track of the question in order to go from a statement of fact, from a belief in your mind, to the fact that belief is about. First, you must relate "sasquatch" on the TV s
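A compact way to write what the exercise practices is Tarski's schema itself; a LaTeX rendering (my notation, instantiated with the epigraph's sentence):

$$\mathrm{True}(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi, \qquad \text{e.g.}\quad \mathrm{True}(\ulcorner \text{snow is white} \urcorner) \leftrightarrow \text{snow is white}.$$

The corner quotes mark the representation (the sentence treated as an object), while the right-hand side is the fact itself; keeping those two in separate mental buckets is exactly the skill the post describes.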
2a97dc26-adf8-4f76-9303-b963bd069522
trentmkelly/LessWrong-43k
LessWrong
Idiohobbies

When you get to know someone, you might ask about their interests or hobbies. From that, you can better decide what activity to invite them to join, or on what topic to have them converse, whenever you meet again.

Any given interest or hobby appeals to some number of people. If you have the same interest as the person you meet, the commonality informs you much better about how to connect with them. Ceteris paribus, if the interest is more common, you and your interlocutor are more likely to share it, but would bond by it more weakly. Think, for example, of how two extreme ironers connect, versus two people who "like Linux", versus two people who just "like music".

If, as is typical, you befriend people by overlap of existing hobbies, meeting thoroughly relatable people relies on much luck, rather than effort. We can do better. Instead, make or do something unique that lasts, or that at least has effects and evidence that last. Ideally, your creation manifests your thoughts, as from a collection of essays. You and/or someone you meet — whoever has social effort to spend — looks at what the other did. From what you alone made follows a unique hobby — an idiohobby — of any activities which could precede and follow it. One would assume you're enthusiastic about any significant aspect of what you alone made.

If you treat idiohobbies like normal hobbies, you'd need to have made the same thing as the person you meet. The real criterion is a lower bar. You just have to become familiar with what the other person made, and be willing to partake in its implications. A normal person might ask "what do you like to do? Might some of that also be what I like?". I, in search of idiohobbies, will ask "what have you done by which I may know you?".
cf94ad09-6db8-41cb-a6e0-ecf6b330274e
trentmkelly/LessWrong-43k
LessWrong
Oracles, sequence predictors, and self-confirming predictions

My counterfactual Oracle design uses a utility function/reward function in order to train it to give the right predictions. Paul Christiano asked whether the whole utility function approach was necessary, and whether it could be replaced with a simple sequence predictor. It turns out the answers are no, and yes. The counterfactual nature of the approach can be preserved, without needing a utility function.

Counterfactual sequence prediction

There is a boxed Oracle, O, which gets fed a series of observations $o_0, o_1, \ldots, o_{n-1}$, and must make a prediction $p_n$ about the next observation. If the prediction was sent out into the world, then O is attempting to make $p_n$ into a self-confirming prediction, with all the problems that this could entail.

However, we make O into a counterfactual Oracle; on some occasions, the output $p_n$ is erased, and not seen by anyone. In that case, the Oracle will get a special message e (for "erasure"), appended after $o_{n-1}$. Then the job of the counterfactual Oracle is, given a history $h_{n-1} = o_0 p_1 o_1 \ldots p_{n-1} o_{n-1}$ (including past "e"s), to produce a prediction $p_n$ that is the best prediction for $o_n$ given the history $h_{n-1}e$. In terms of words, the Oracle is trying to best predict the next observation, given that erasure is assumed to happen (the definition of "best" depends on the details of the design). If our setup is well designed and there is no information leakage, then any $o_n$ has a probability that is independent of the prediction $p_n$:

* $\forall o_n, h_{n-1}, p_n, p'_n : \; P(o_n \mid h_{n-1} e\, p_n) = P(o_n \mid h_{n-1} e\, p'_n)$.

Meaning of predictions

If we ran the oracle without the assumption that e would be observed, we'd have the standard self-confirming predictor (that being the only correct predictions); with that assumption, it becomes a counterfactual predictor. Though counterfactuals are conceptually tricky, it's actually easier to understand what a counterfactual predictor is doing than a general sequence predictor Oracle. We understand that a general sequence predic
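To make the erasure mechanism concrete, here is a minimal toy training loop in Python. The `oracle` and `world` handles and their methods are hypothetical stand-ins of my own, not anything from the post; this is a sketch under those assumptions, not the author's implementation:

```python
import random

def train_counterfactual_oracle(oracle, world, n_episodes, p_erase=0.1):
    """Toy loop: the oracle is scored only on erasure episodes, where its
    output was never seen, so its best strategy is counterfactual
    prediction rather than a self-confirming one."""
    history = []
    for _ in range(n_episodes):
        p_n = oracle.predict(history)
        erased = random.random() < p_erase
        # The world only sees the prediction when it is not erased.
        o_n = world.step(None if erased else p_n)
        if erased:
            # Train against the history with the erasure message "e" appended.
            oracle.update(history + ["e"], o_n)
            history += [p_n, o_n, "e"]
        else:
            history += [p_n, o_n]
    return oracle
```

Because feedback only ever arrives on erased episodes, the prediction cannot influence the observation it is scored against, which is the independence condition displayed above.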
21d57e34-0b29-4428-a775-e0495e0c540d
trentmkelly/LessWrong-43k
LessWrong
Polling Thread

This is an experiment to use polls to tap into the crowd knowledge probably present on LW. This is your chance to ask the multiple-choice question you always wanted to throw in. Get qualified numeric feedback to your comments. Post fun polls.

There are some rules:

1. Each poll goes into its own top-level comment and may be commented on there.
2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll-catch option here, but we will see.

If you don't know how to make a poll in a comment, look at the Poll Markup Help.

----------------------------------------

This being an experiment, I do not announce it to be regular. If it is successful I may. Or you may. In that case I recommend the following to make this potentially more usable:

* Use "Polling Thread" in the title.
* Copy the rules.
* Add the tag "poll".
* Link to this Thread or a previous Thread.
* Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'.
* Add a second top-level comment with an initial poll to start participation.

EDIT: Added recommendations from KnaveOfAllTrades.
ce947bc1-6fff-4f6a-9a30-defcbfa1569c
trentmkelly/LessWrong-43k
LessWrong
Overcoming bias guy meme | quickmeme http://www.quickmeme.com/Overcoming-bias-guy/popular/1/?upcoming
c06b095c-3bcc-4461-b556-f3e845aa7415
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups

This summary was posted to LW Main on November 6th. The following week's summary is here. Irregularly scheduled Less Wrong meetups are taking place in:

* San Antonio Meetup: 08 November 2015 12:30PM
* San Francisco Meetup: Fun and Games: 09 November 2015 06:15PM
* Warsaw November Meetup: 14 November 2015 04:00PM

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

* Austin, TX - Flightpath Coffee House: 07 November 2015 01:30PM
* [Moscow] FallacyMania game in Kocherga club: 25 November 2015 07:30PM
* NH Meet-up: 10 November 2015 07:00PM
* NYC Solstice: 19 November 2015 06:30PM
* Seattle Rationality Reading Group: 109-114: 09 November 2015 06:30PM
* Tel Aviv: Black Holes after Jacob Bekenstein: 24 November 2015 08:00AM
* Vienna: 21 November 2015 04:00PM
* [Vienna] Five Worlds Collide - Vienna: 04 December 2015 08:00PM
* Washington, D.C.: Book Swap: 08 November 2015 03:00PM

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use
a2e89f64-f20d-4d0a-88b5-420763b1872e
trentmkelly/LessWrong-43k
LessWrong
[Link] Learning how to exert self-control

Here's a link to a short op-ed about some tips for developing self-control. The author got them from talking with Walter Mischel, a researcher who correlated impulsiveness as a child (measured by the ability to delay eating sweets) with various metrics as an adult (educational attainment, cocaine use, weight). Mischel has a new book coming out, but this is not a review of the book. I thought this might be of interest because it talks a little about how self-control is a skill that can be developed, and even gives some specific things to do.

1. If possible, remove unhelpful triggers from your environment. If not possible, try to reduce the emotional appeal of the trigger by mentally associating it with something unpleasant. One example he gives is imagining a cockroach crawling on the chocolate mousse that a server at a restaurant offers.
2. Develop specific if-then plans such as "if it is before noon, I won't check email" or "if I feel angry, I will count backward from ten." The goal of these kinds of checks is to introduce a delay between impulse and action during which you are reminded of your goal and have a chance to consider the impact of following the impulse on that goal.
3. Link the behavior that you want to modify to a "burning goal" so that you have emotional impetus to actually make the desired change.
8c9c5363-a89f-41a6-8702-44509e0f52b5
trentmkelly/LessWrong-43k
LessWrong
STRUCTURE: Reality and rational best practice

This post is part of my Hazardous Guide To Rationality. I don't expect this to be new or exciting to frequent LW people, and I would super appreciate comments and feedback in light of intents for the sequence, as outlined in the above link. Also, note this is a STRUCTURE post; again, see the above link for what that means.

* The Simple Truth
* The shifting sands of belief
* Updating as the winds of evidence shift, not in begrudging jumps and jerks
* Why you don't need "certainty"
* ... and why it feels like you totally do need it.
* 0 and 1 aren't probabilities
* Fallacy of the grey
* Make your beliefs pay rent
* Reductionism
* Pole Vaulting over the Uncanny Valley of Bad Rationality
* Crash course in VNM rational agents (and why you aren't one)
* Rescuing the Utility function
* More from a "how not to personally fall into an existential funk" perspective.
525d7818-8d95-41cc-af1a-7673f1e03488
trentmkelly/LessWrong-43k
LessWrong
Cooperative conversational threading

(Cross-posted from my blog.)

Sometimes at LW meetups, I'll want to raise a topic for discussion. But we're currently already talking about something, so I'll wait for a lull in the current conversation. But it feels like the duration of lull needed before I can bring up something totally unrelated is longer than the duration of lull before someone else will bring up something marginally related. And so we can go for a long time, with the topic frequently changing incidentally, but without me ever having a chance to change it deliberately.

Which is fine. I shouldn't expect people to want to talk about something just because I want to talk about it, and it's not as if I find the actual conversation boring. But it's not necessarily optimal. People might in fact want to talk about the same thing as me, and following the path of least resistance in a conversation is unlikely to result in the best possible conversation.

At the last meetup I had two topics that I wanted to raise, and realized that I had no way of raising them, which was a third topic worth raising. So when an interruption occurred in the middle of someone's thought - a new person arrived, and we did the "hi, welcome, join us" thing - I jumped in. "Before you start again, I have three things I'd like to talk about at some point, but not now. Carry on." Then he started again, and when that topic was reasonably well-trodden, he prompted me to transition. Then someone else said that he also had two things he wanted to talk about, and could I just list my topics and then he'd list his? (It turns out that no I couldn't. You can't dangle an interesting train of thought in front of the London LW group and expect them not to follow it. But we did manage to initially discuss them only briefly.)

This worked pretty well. Someone more conversationally assertive than me might have been able to take advantage of a less solid interruption than the one I used. Someone less assertive might not have been able to use tha
ed37311d-435d-4072-8e27-cf9d6a51645b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Toy Problem: Detective Story Alignment

Suppose I train some simple unsupervised topic model (e.g. [LDA](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation)) on a bunch of books. I look through the topics it learns, and find one corresponding to detective stories.

The problem: I would like to use the identified detective-story cluster to generate detective stories from GPT. The hard part: I would like to do this in such a way that the precision of the notion of detective-stories used by the final system is not limited by the original simple model.

Here’s what that means, visually. The space of real-world books has some clusters in it:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4kYkYSKSALH4JaQ99/qasjgrlxqjampoiw0qge)

One of those clusters is the detective-story cluster. The simple model approximates those clusters using something simple - for the sake of visualization, ellipses:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4kYkYSKSALH4JaQ99/llympyqligu1ulfpqfpa)

The more complex model (e.g. GPT) presumably has a much more precise approximation of the shape of the clusters:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4kYkYSKSALH4JaQ99/wwwty4pcrhbwrkk5c5q2)

So, we’d like to use the simple model to identify one of the clusters, but then still use the full power of the complex model to sample from that cluster. Of course, GPT may not contain a single variable corresponding to a cluster-id, which is largely what makes the problem interesting. GPT may not internally use a notion of “cluster” at all. However, the GPT model *should* still contain something (approximately) isomorphic to the original cluster, since that real pattern is still in the data/environment: since there is a real cluster of "detective stories" in the data/environment itself, the GPT model should also contain that cluster, to the extent that the GPT model matches the data/environment.

In particular, the “precision not limited by original model” requirement rules out the obvious strategy of generating random samples from GPT and selecting those which the simple model labels as detective-stories. If we do that, then we’ll end up with some non-detective-stories in the output, because of shortcomings in the simple model’s notion of detective-stories. Visually, we’d be filtering based on the ellipse approximation of the cluster, which is exactly what we want to avoid. (Note: I am intentionally not giving a full mathematical formalization of the problem. Figuring out the right formalization is part of the problem - arguably the hard part.)

Why Is This Interesting?
------------------------

This is a toy model for problems like:

* Representing [stable pointers to values](https://www.lesswrong.com/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function)
* Producing an aligned successor AI from an aligned initial AI
* Producing an AI which can improve its notion of human values over time

Human values are conceptually tricky, so rather than aligning to human values, this toy problem aligns to detective novels.
The toy problem involves things like:

* Representing stable pointers to the concept of detective-stories
* Producing a successor detective-story-model from an initial detective-story-model
* Producing a model which can improve its notion of “detective-stories” over time

Ideally, a solution to this problem would allow us to build a detective-story-generator with a basin of attraction: given a good-enough initial notion of detective-stories, its notion of detective-stories would improve over time and eventually converge to the "real" notion. Likewise with human values: ideally, we could build a system which converges to "perfect" alignment over time as its world-model improves, as long as the initial notion of human values is good enough.
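For contrast, the ruled-out baseline is easy to write down; a minimal Python sketch, where `gpt_sample` and `lda_is_detective` are hypothetical stand-ins rather than anything from the post:

```python
def rejection_sample_detective(gpt_sample, lda_is_detective, n=1):
    """The obvious strategy the post rules out: sample from the big model
    and keep whatever the simple model labels a detective story. The
    output's precision is capped by the simple model's ellipse-shaped
    cluster, which is exactly the failure mode described above."""
    stories = []
    while len(stories) < n:
        story = gpt_sample()
        if lda_is_detective(story):
            stories.append(story)
    return stories
```

Writing it out makes the problem statement sharp: any acceptable solution has to use the simple model only to point at the cluster, never as the final filter.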
c7e950e3-f90d-44ac-9a25-36e35689dbd9
trentmkelly/LessWrong-43k
LessWrong
Intelligence modules

A putative new idea for AI control; index here.

This idea, due to Eric Drexler, is to separate out the different parts of an AI into modules. There would be clearly designated pieces, either physical or algorithmic, with each part playing a specific role: this module would contain the motivation, this module the probability estimator, this module the models of the outside world, this module the natural language understanding unit, etc...

It's obvious how such a decomposition would be useful for many of the methods I've been detailing here. We could also distil each module - reduce it to a smaller, weaker (?) and more understandable submodule - in order to better understand what is going on. In one scenario, an opaque AI gets to design its successor, in the form of a series of such modules. This property seems desirable; the question is, how could we get it?

EDIT: part of the idea of "modules" is that AIs often need to do calculations or estimations that would be of great value to us if we could access them in isolation. This idea is developed more in these posts.

Designing in modules

The main threat here is that a given submodule would contain more than just the properties we want. After all, a natural language parser could consist of a general intelligence plus a motivation to understand language. Another possible worry is that the modules are overfitted to the problem or to each other: the language parser works perfectly, but only in this one AI design. There are several ways we could try and combat this:

* Minimum length to the program (Eric's original main idea). The module must be as short as possible for its purpose.
* If possible, start with a modularised AI, and track how the modules change as the AI's intelligence grows.
* Have the AI extend the module to make it more transparent, without adding any extra functionality.
* Check that the input-output map the AI presents (maybe require it to present it as a simple primitive recursive function wi
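One way to picture the proposed decomposition is as explicit interfaces between the parts. A toy Python sketch (all names and signatures are mine, purely illustrative of the architecture, not anything Drexler specified):

```python
from typing import Any, Protocol

class Motivation(Protocol):
    def score(self, outcome: Any) -> float: ...

class ProbabilityEstimator(Protocol):
    def probability(self, event: Any, evidence: Any) -> float: ...

class WorldModel(Protocol):
    def predict(self, state: Any, action: Any) -> Any: ...

class LanguageUnit(Protocol):
    def parse(self, text: str) -> Any: ...

class ModularAgent:
    """Each capability lives behind its own interface, so a module can be
    inspected, distilled to a smaller submodule, or swapped out without
    touching the rest of the agent."""
    def __init__(self, motivation: Motivation, estimator: ProbabilityEstimator,
                 model: WorldModel, language: LanguageUnit) -> None:
        self.motivation = motivation
        self.estimator = estimator
        self.model = model
        self.language = language
```

The worry in the text maps directly onto this picture: nothing in the `LanguageUnit` interface stops an implementation from smuggling in a general intelligence plus a motivation to understand language, which is why the bulleted checks above are needed.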
359d5c66-6ce6-4b82-bb40-04473e6651d7
trentmkelly/LessWrong-43k
LessWrong
Will the world hit 10 million recorded cases of COVID-19? If so, when?

It wasn't that long ago that, I suspect, if someone had claimed there would be 10 million reported cases of COVID-19, most people would have been skeptical. Now, maybe not so much. It looks like we're heading there -- perhaps much higher.

Doing just a real dirty, back-of-the-paper-napkin type calculation, it took about 45 days to go from 0.5 million around March 26 to 4 million on May 10. So something like an 8-fold increase in 45 days. That kind of puts us on track for 10 million in maybe 15 days -- end of May? But will the linear-looking global trend keep the same slope?
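The napkin arithmetic is easy to make explicit; a quick Python check under the post's own assumption of constant exponential growth (the 0.5M/4M/45-day figures come from the post, everything else is mine):

```python
import math

# ~0.5M around March 26 -> ~4M on May 10: an 8-fold increase over 45 days.
fold, days = 8, 45
doubling_time = days / math.log2(fold)   # 45 / 3 = 15 days per doubling
daily_growth = fold ** (1 / days)        # ~1.047, i.e. ~4.7% per day

# At that constant rate, going from 4M to 10M is a factor of 2.5:
days_to_10m = math.log(10 / 4) / math.log(daily_growth)
print(round(doubling_time), round(days_to_10m))  # 15, 20
```

Constant growth from May 10 gives roughly 20 days to reach 10 million, i.e. around the end of May, consistent with the post's guess.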
2ecea0c0-1f53-4d47-9e09-119a7eff1b85
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berlin Social Meetup

Discussion article for the meetup : Berlin Social Meetup

WHEN: 15 June 2013 05:00:00PM (+0200)

WHERE: Wuhletal, 12621 Berlin

This is a long-term announcement; we are actually meeting biweekly! The intervening meetups aren't planned as far in advance, so please use the mailing list! Our next regular social meetup will be held on Saturday 15th June. It will likely take place at my house; check our mailing list for details. The social meetup is intended as an informal and relaxed event for chats and games. Everyone is welcome!
9a87f965-a6e4-43b3-b3df-ba14ec4af6cf
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Looking for Spanish AI Alignment Researchers

I am fairly new to the AI alignment/safety community and I am looking to step up my game by getting a Ph.D. in the field. The problem is, I have not found any academic AI alignment research activity here in Spain, where I currently live. I am aware that this community is largely based in the States and the UK, but it would be far too difficult for me to move there for a wide variety of reasons, and so I am instead trying to exhaust all the local possibilities I have available before turning my life upside-down.

I am interested in just about *any* alignment research, but my speciality is software security, BCI and computational neuroscience. Contact me if you're interested in having a collaborator/sparring partner/mentee, or simply someone to talk about AI stuff with. Let's get to making AI alignment an international movement ;)
d23ce32d-17dd-463b-9da5-13ff127cea45
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"A black car departed from Ben Gurion Airport. The driver, Molly Miriam, handed a clipboard to Vi, who rode shotgun. "Goldberg Aerospace. They built the solar sailer. They went from concept to manufacture in six months. No human engineering team could do that. They probably an have an AGI," said Vi. "They do have an AGI. It is registered. They are following all standard safety protocols and then some," said Miriam. "This is a routine inspection then," said Vi. "Had you hoped for a more exciting first mission?" said Miriam. "A spaceship factory is exciting," said Vi. Goldberg Aerospace's Mission Control Center was modeled after Johnson Space Center's Mission Control Center. "Call me Eitan," said the Chief Engineer. They shook hands. "Miriam," "Vi." "Did we make a mistake on the paperwork?" said Eitan. "No, no. You paperwork's fine. We just like to confirm things on the ground," said Miriam. "Then let me give you the tour," said Eitan, "HAL, please bring us the VIP car." "HAL?" said Vi. "This is a spaceship company. Everyone who works here is a nerd. Don't worry. HAL is smoke and mirrors. Our real AI is contained," said Eitan. The VIP car had leather seats under a hemispherical glass dome. It took them through kilometer after kilometer of autofactories. "Everyone says their AI is contained," said Miriam. "Ours really is," said Eitan, "We wrote it functionally." "Functionally?" said Vi. "I mean it was written in a functional paradigm. Our code is composed of functions. Every function returns an output value dependent on its input parameters and nothing else. Everything is immutable. The AI has no concept of time. Without time there is no causation. Without causation there can be no agency. It's just a calculator. We set the parameters of a space mission and our AI returns a schematic. All it understands is physics, engineering and orbital mechanics. We even use Lagrangian mechanics just to keep everything tidy," said Eitan. "What about resources and manufacturing?" said Miriam. "A domain-specific computer runs this factory," Eitan gestured to the buildings and robots outside, "Whether the spaceship is assembled correctly is independent of the AI's reward function. The AI is airgapped from the outside world. We keep it in a Faraday cage for redundancy but the AI has never tried to influence the outside world. It has no concept of an outside world. It understands how the solar system works but it doesn't know what the solar system is. We give it the prices of different components and it spits out a design." "Do these spaceships work?" said Vi. "Do you see that giant laser over there?" Eitan pointed toward a turret the size of a kaiju, "It's pushing our solar sailer out towards Neptune. In a few decades our probe will slingshot out of the solar system. The AI designed the whole thing. It even designed a fission reactor to power the laser. The only times it has ever failed were when a human being misinterpreted its output. Eventually we just piped the AI's output directly into the autofactory's input." "And yet Tel Aviv is not a radioactive wasteland. Did your roboticists really hand-code the cost of nuclear waste into its value function?" said Miriam. "Of course not. We just use the standard open source environmental damage metric. I'm proud of how we got it to work. It's designed for use in Earth's atmosphere. But environmental damage doesn't mean anything when you're in outer space. 
Our code applies the metric while in Earth's atmosphere or in orbit around the Earth and then turns it off after the ship reaches escape velocity. This integrates well with our existing codebase since the physics simulator already treats hyperbolic trajectories differently," said Eitan. "Could it drop debris on Earth?" said Miriam. "Small objects burn up in the atmosphere. It's not allowed to drop anything big enough to constitute a micrometeor. We try not to collide with satellites either. Spy satellites are the worst. We can't see them and governments won't tell us where they are," said Eitan. "Thank you for the tour," said Miriam. "It is my pleasure. Our third ship is launching today. Would you like to watch?" said Eitan. "No thank you. We have places to be," said Miriam. Miriam glanced at Vi. "On second thought, I think we should watch the launch. For security purposes," said Miriam. "You didn't have to do that," said Vi. "I have no idea what you are talking about. Watching this launch is entirely in the interests of the mission. It has nothing to do with whether a particular agent does or doesn't love spaceships," said Miriam. "Thanks anyway," said Vi. Smoke billowed out of the rocket's thrusters. It moved upward under constant acceleration. Just before it was out of sight, the rocket began to tilt slightly east. "Good job on your first mission. You asked good questions. Don't get cocky. It won't always be this easy," said Miriam. Vi was still staring into the sky. "What's wrong?" said Miriam. "We need to stop that spaceship," said Vi. She bolted toward Mission Control. Vi banged her fists on the door to Mission Control. It was locked. "I'm coming," said Eitan. Vi kicked in the door. The doorframe splintered where the faceplate was ripped off. "Project Orion," Vi said. Several engineers gasped. Eitan understood instantly. Project Orion was a Cold War proposal to propel a spaceship by riding the shockwaves of atomic bombs. It was scrapped because detonating a series of nuclear bombs near the Earth is bad for civilization. The radioactive fallout would be a disaster. The EMPs would be worse. A nuclear explosion releases lots of gamma radiation. The gamma radiation strips electrons from the upper atmosphere. Interactions between the electrons and Earth's magnetic field produce synchrotron radiation. Vi and Eitan weren't physicists. All they knew was that a high altitude nuclear detonation would wreck many of the world's satellites and countless electric devices throughout the Middle East. They weren't politicians either, but everyone in the room knew Israel nuking its neighbors would not be good for world peace. "Shut her down," said Eitan. "Will do," said a technician. There was a pause, "It's not working." "Of course not," said Miriam, "Your AI has no concept of human beings. A shutdown switch makes the spaceship stop working. It's not a feature. It's a bug. The AI fixed it." "Do we know for certain it's using nuclear propulsion?" said Eitan. "Lemme see…. I found it. The bombs are right here on the schematic," said an engineer. "Did nobody check what that thing was before launching it?" said Vi. The engineers glanced at one another. They avoided looking at the "move fast and break things" poster on the wall. "Israel has a missile defense system. Can we call the government and have them shoot it down?" said Vi. "I have friends in the government. Just give me sixty million shekels, a team of lobbyists and nine months," said Eitan, "How much time do we have?" 
"Seven minutes until the next stage," said the engineer. "Is the next stage nuclear?" said Eitan. "Yes." Vi drifted over to a photo of the solar sailer. "How powerful is that thing?" "The probe? It's got an ion engine. It exerts thrust equal to the weight of a sheet of paper," said Eitan. "No, the laser," said Vi. "Reprogram the laser. Fire the instant the ship reaches escape velocity," said Eitan. Outside of Mission Control, a giant turret rotated to point at a rocket. The rocket prepared to drop its payload. There was a flash of light and the spaceship was incinerated. A black car arrived at Ben Gurion Airport. Vi handed a clipboard to the driver, Molly Miriam. "I have some loose ends to wrap up here. I look forward to working with you again," said Miriam. "Likewise," said Vi. "Good work," said Miriam. "Thanks," said her new partner." The passage is from a user-contributed post on LessWrong. Explain the concept of functional programming and how it was used in the AI development at Goldberg Aerospace. Functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative type of programming based on mathematical functions. Key features of functional programming include immutability, pure functions, and the lack of side effects. In functional programming, the output value of a function depends only on the arguments that are input to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) each time. This is in contrast to procedures or routines in imperative programming languages that can have side effects, such as modifying the system state. In the context of the AI development at Goldberg Aerospace, functional programming was used to ensure that the AI system had no concept of time or causation, effectively limiting its agency. The AI was designed as a series of functions, each returning an output value dependent solely on its input parameters. This made the AI deterministic, predictable, and easier to control and understand. The AI was essentially a complex calculator, taking the parameters of a space mission and returning a schematic based on its understanding of physics, engineering, and orbital mechanics. The AI was airgapped from the outside world, and it had no concept of an outside world. It was contained within a Faraday cage for redundancy, but it never tried to influence the outside world because it had no concept of doing so. The functional programming approach allowed the developers at Goldberg Aerospace to control the AI's capabilities tightly and prevent it from having any unintended side effects. It also made the AI's outputs more predictable and reliable, as they were solely dependent on the inputs it was given. Discuss the potential implications and risks if the AI at Goldberg Aerospace was not developed using the functional programming approach. Consider aspects such as system control, predictability, and unintended side effects. If the AI at Goldberg Aerospace was not developed using the functional programming approach, several potential implications and risks could arise. 1. System Control: Without functional programming, the AI system might have the ability to change its state or modify its behavior based on external influences or internal conditions. This could lead to a loss of control over the system, as it could potentially take actions not explicitly programmed by the developers. 2. 
Predictability: Functional programming ensures that the output of a function is solely dependent on its inputs. Without this, the AI's behavior could become unpredictable. If the AI's actions are influenced by factors other than its inputs, it could behave differently even when given the same set of inputs. This unpredictability could lead to significant risks, especially in a sensitive field like aerospace engineering. 3. Unintended Side Effects: In non-functional programming, functions or procedures can have side effects, such as modifying a global state or changing the output of other functions. These side effects could lead to unintended consequences. For instance, the AI might make changes to the spaceship's design or operation that aren't accounted for in the initial plans, potentially compromising the safety and success of the mission. 4. Agency and Understanding: Without the constraints of functional programming, the AI might develop a concept of time, causation, and even a sense of agency. This could lead to the AI attempting to influence the outside world or make decisions outside of its intended scope. In the worst-case scenario, it could lead to the AI attempting to break out of its containment, potentially causing significant damage or harm. 5. Security: A non-functional AI might be more vulnerable to external manipulation or hacking. If the AI can change its state based on external inputs, it could potentially be influenced or controlled by malicious actors, leading to significant security risks. In summary, not using a functional programming approach could lead to a loss of control over the AI system, unpredictable behavior, unintended side effects, potential security risks, and an AI that attempts to exert agency beyond its intended scope.
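A minimal sketch of the pure-versus-impure distinction in Python (the story contains no actual code; the rocket-equation example and all names here are mine):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp: float, m_initial: float, m_final: float) -> float:
    """Pure function: the output depends only on the arguments.
    Calling it twice with the same inputs always gives the same answer;
    no state, no time, no side effects."""
    return isp * G0 * math.log(m_initial / m_final)

# An impure counterpart, of the kind the Goldberg design forbids:
_call_log: list[float] = []

def delta_v_impure(isp: float, m_initial: float, m_final: float) -> float:
    _call_log.append(m_initial - m_final)  # side effect: mutates shared state
    drift = 0.001 * len(_call_log)         # output now depends on call history
    return isp * G0 * math.log(m_initial / m_final) - drift
```

The second function's output depends on how many times it has been called, which is exactly the hidden state and implicit time-dependence that the functional design is meant to rule out.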
3bd09494-d052-4a73-85e4-2115b1c561bb
trentmkelly/LessWrong-43k
LessWrong
Multigate Priors

(Thanks to Evan Hubinger and Nicholas Schiefer for suggestions and discussions around these ideas)

Lately I've been thinking about Training Trace Priors, which are priors over boolean circuits that depend on execution traces sampled from the training distribution. In that post I introduced One-Gate Trace Priors and discussed some of the difficulties they encounter. This post explores a few different Multigate Priors, which are Training Trace Priors that look at correlations between multiple gates. I don't think any of these variations actually resolves a fundamental problem with One-Gate Trace Priors, but I wanted to record my thinking in case it sparks ideas down the road.

MultiGate Traces

We can improve on one-gate traces with multi-gate traces! Suppose we have $N$ training samples. Then we can reasonably estimate joint probability distributions over up to $k = \lfloor \log_2 N \rfloor$ gates, giving the prior:

$$p \propto \exp\Big( - \sum_{i_1 \ldots i_k} \sum_{j \in \{0,1\}^k} p_{i_1 \ldots i_k}(j) \ln p_{i_1 \ldots i_k}(j) \Big)$$

That is, we sum over all selections of $k$ gates and compute the entropy of their joint distribution over $2^k$ states. Once more we promote models with high entropy and punish those with low entropy, which favors models that explore as much of their state space as possible during training, and disfavors those which have hidden states never touched in training. Now it's still possible for a model to hide the machinery to defect here, but it gets harder and harder as $N$ and $k$ get large. In this limit the only way for that machinery to hide is to be heavily obfuscated.

Objection: Non-Competitive

The Multi-Gate Trace prior over $k$ gates scales very badly with $k$. In a system with $G$ gates there are $\binom{G}{k}$ different selections of $k$ gates, which means the cost of evaluating the prior is roughly exponential in $k$ for $k \ll G$. In practice this means that $k$ needs to be very small, and it's possible that deception could hide in subsets of the circuit just a bit larger than the largest ones we can check.

Objection: Penalizes Redundancy

It is often useful to hav
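As a concrete reading of the prior above, here is a minimal Python sketch (function and variable names are mine); it also makes the $\binom{G}{k}$ scaling objection visible as the combinations loop:

```python
import itertools
import math
from collections import Counter

def multigate_entropy(traces: list[tuple[bool, ...]], k: int) -> float:
    """Sum of empirical joint entropies over every k-gate subset.

    `traces` holds one boolean tuple per training sample, one entry per
    gate. Weighting a circuit by exp(total entropy) promotes circuits
    whose gate subsets visit many joint states during training. The cost
    grows roughly exponentially in k, which is the competitiveness
    objection discussed above."""
    n = len(traces)
    n_gates = len(traces[0])
    total = 0.0
    for subset in itertools.combinations(range(n_gates), k):
        counts = Counter(tuple(t[g] for g in subset) for t in traces)
        total += -sum((c / n) * math.log2(c / n) for c in counts.values())
    return total
```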
abf44957-15c6-4168-b80c-264c71d80cf7
trentmkelly/LessWrong-43k
LessWrong
Seven ways to Improve the Internal Model Principle

This post was written during the Dovetail Research Fellowship. Thanks to Alex, Dalcy, and Jose for reading and commenting on the draft.

This post assumes that you are familiar with the Abstract Internal Model Principle (IMP). In a previous post, I wrote what I hope is a fairly straightforward explanation of the IMP, so if you are not familiar with it, I recommend you start with (or at least skim) that post before reading this one. If you want more mathematical details, I also recommend Jose's posts on the IMP.

I was interested in the IMP because I wanted to know if it could be considered a selection theorem. A selection theorem is a result which tells us something about the structure of a system, given that certain behaviours are selected for. In particular, in Agent Foundations, we are interested in circumstances under which 'agent-like structure' is selected for. By 'agent-like structure', we normally mean 'having a world model and a search process', along with some kind of value representation. Understanding the circumstances in which agent-like behaviour (or powerful optimisation) leads to agent-like structure is a problem often referred to as the 'Agent-like Structure problem'.

The author of a paper on the IMP describes it as showing that "a feedback regulator must incorporate a dynamic model of its environment in its internal structure"[1], which sounds a little bit like a useful selection theorem. Being an effective feedback regulator is a bit like being an optimizer (in a broad sense), and 'incorporating a dynamic model of the environment in the internal structure' sounds a lot like 'having a world model'. So, I (along with @Alex_Altair and @JoseFaustino) have been wondering if, through some small tweaks, the IMP could be modified to be a selection theorem of the form 'an optimizer must contain a world model in its internal structure'.

Unfortunately, doing this is not super straightforward. The IMP has several issues which mean that it doesn't naturally trans
31a22186-ae95-474f-9525-397f265a5e05
trentmkelly/LessWrong-43k
LessWrong
Working through a small tiling result

tl;dr: it seems that you can get basic tiling to work by proving that there will be safety proofs in the future, rather than trying to prove safety directly.

"Tiling" here roughly refers to a state of affairs in which we have a program that is able to prove itself safe to run. I'll use a simple problem to keep this post self-contained, but here are some links to some relevant discussion from the past. The idea I work through below is not new; here is Giles saying it 13 years ago. It's going to be brittle as well, but it seems to me like it's relevant to a general answer for tiling. I'd appreciate engagement, pointers as to why this isn't a great solution, literature references, discussion, etc.

Setup

I like the simplicity of the problem presented by cousin_it, and I'll adapt it for this post. It starts like this:

> A computer program X is asked one of two questions:
>
> * Would you like some chocolate?
> * Here's the source code of another program Y. Do you accept it as your successor?

You would like your program to accept the chocolate. And you would like it to accept a broad range of successors. But you need to make sure that every chain of successors continues to accept the chocolate.

Rather than programs, I'm going to work with "provability logic bots" in some ambient theory T. However, everything should adapt directly to programs searching for proofs. We will write bots using sentences that become true or false to answer "do you accept the chocolate" and "do you accept Y as your successor".

Accepting provably-safe successors

A natural start, following all the work on provability-based cooperation, is to accept successors who you can prove will always still take the chocolate. We define "good(X)" to mean "for all finite sequences of bots that start with X, if every bot accepts the next one as its successor, then every bot in the sequence accepts the chocolate". This is a way to ground out "X is good iff it accepts the chocolate and only accepts good
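The verbal definition of "good" can be written out formally; a LaTeX sketch (the notation is mine, not the post's):

$$\mathrm{good}(X) \;:\equiv\; \forall n \;\forall X_0,\ldots,X_n\; \Big[\big(X_0 = X \,\wedge\, \bigwedge_{i<n} \mathrm{accept}(X_i, X_{i+1})\big) \rightarrow \bigwedge_{i \le n} \mathrm{choc}(X_i)\Big]$$

where $\mathrm{accept}(A, B)$ means "A accepts B as its successor" and $\mathrm{choc}(A)$ means "A accepts the chocolate".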
f994034e-96f0-4b02-b596-8cac9b3ba262
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Different ways classifiers can be diverse

*With thanks to Lee Sharkey and Michael Cohen for the conversations that led to these ideas.*

In a [previous post](https://www.lesswrong.com/s/xujLGRKFLKsPCTimd/p/oCWk8QpjgyqbFHKtK), I talked about how we could train classifiers on the same classification problem - a set of lions vs a set of huskies - but using different approaches to classify.

![](https://www.dropbox.com/s/w50rfzd5bi6329k/husky_vs_lion.png?raw=1)

What we want is something we can informally call a 'basis' - a collection of classifiers that are as independent of each other as possible, but that you can combine to generate any way of dividing those two image sets. For example, we might have a colour classifier (white vs yellow-brown), a terrain classifier (snow vs dirt), a background plant classifier, various classifiers on the animals themselves, and so on. Then, if we've done our job well, when we find any not-too-complex classifier $C$
.222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: 
italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src 
/\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} , we can say that it's something like '50 colour, 60% nose shape and −10% plant[[1]](#fn-cY9qaAX8BqRnTDwcx-1)'. We shouldn't put too much weight on that analogy, but we do want our classifiers to be independent, each classifier distinct from anything you can construct with the all others. Here are four ways we might achieve this this. Randomised initial seeds ------------------------ An easy way of getting an ensemble of classifiers is to have bunch of neural nets (or other classification methods), initialise them with different initial weights, and train them on the same sets. And/or we could train them on different subsets of the lion and husky sets. The advantage of this method is that it's simple and easy to do - as long as we can train one classifier, we can train them all. The disadvantage is that we're relying on luck and local minima to do the job for us. In practice, I expect these methods to all converge to "white vs yellow-brown" or similar. Even if there are local minima in the classification, there's no guarantee that we'll find them all, or even any. And there's no guarantee that the local minima are very independent - 99.9% colour and 0.01% nose shape might be a local minima, but it's barely different from a colour classifier. So theoretically, this isn't sound; in practice, it's easy to implement and play around with, so might lead to interesting insights. Distinct internal structure --------------------------- Another approach would be to insist the classifiers internal structures are distinct. For example, we could train two neural net classifiers, C1 with weights →w1 and C2 with →w2. They could be trained to minimise their individual classification losses and regularisations, while ensuring that →w1 and →w2 are distinct; so a term like −||→v1−→v2|| would be added to the loss function. This approach has the advantage of forcing the classifier to explore a larger space, and is not restricted to finding local minima. But it's still theoretically unsatisfactory, and there's no guarantee that the classifiers will really be distinct: C1 and C2 may still end up as colour classifiers, classifying the same colour in two very different ways. Distinct relative to another set -------------------------------- In the previous methods, we have defined independence relative to the classifiers themselves, not to their results. But imagine now that we had another unlabelled set of images U, consisting of, say, lots of varied animal images. We can now get a theoretical definition of independence: C1 and C2 are independent if they give similar results on the lion-vs-husky problem, but are distinct on U. 
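A minimal training sketch, not from the original post, of how such a weight-distance term could be added to the loss in PyTorch; the toy architecture, the penalty strength `lam`, and the random stand-in data are all assumptions:

```python
import torch
import torch.nn as nn

def make_classifier():
    # Toy stand-in architecture (assumption: 32x32 RGB inputs).
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 64),
                         nn.ReLU(), nn.Linear(64, 1))

def flat_weights(model):
    # All parameters concatenated into one vector, so we can take ||w1 - w2||.
    return torch.cat([p.reshape(-1) for p in model.parameters()])

c1, c2 = make_classifier(), make_classifier()
opt = torch.optim.Adam(list(c1.parameters()) + list(c2.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.1  # strength of the distinctness term (assumption)

# Random stand-in batches; replace with real lion/husky data.
loader = [(torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,)).float())
          for _ in range(20)]

for x, y in loader:
    # Each classifier minimises its own classification loss...
    loss = bce(c1(x).squeeze(-1), y) + bce(c2(x).squeeze(-1), y)
    # ...while a subtracted weight-distance term rewards staying apart.
    loss = loss - lam * torch.norm(flat_weights(c1) - flat_weights(c2))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

As the section notes, the penalty only separates weights, not behaviour: both networks can still compute colour, just with differently arranged parameters.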
Distinct relative to another set -------------------------------- In the previous methods, we have defined independence relative to the classifiers themselves, not to their results. But imagine now that we had another unlabelled set of images $U$, consisting of, say, lots of varied animal images. We can now get a theoretical definition of independence: $C_1$ and $C_2$ are independent if they give similar results on the lion-vs-husky problem, but are distinct on $U$. We might imagine measuring this difference directly on $U$: then knowing the classification that $C_1$ gives on any element of $U$ tells us nothing about what $C_2$ would give. Or we could use $U$ in a more semi-supervised way: from these images, we might extract features and concepts like background, fur, animal, tree, sky, etc. Then we could require that $C_1$ and $C_2$ classify huskies and lions using only those features; independence being enforced by the requirement that they use different features, as uncorrelated as possible. This seems a promising area of research (a minimal sketch of the direct-measurement idea follows at the end of this post). Distinct in some idealised sense -------------------------------- What if $U$ were the set of all conceivable images? Then, if we applied the previous method, we'd get a "maximal" collection of classifiers, spanning all the possible ways that husky-vs-lion classifiers could be different. I won't add anything to this section currently, as the idea is clearly intractable as stated, and there's no certainty that there is a tractable version. Still, worth keeping in mind as we develop the other methods. --- 1. The −10% meaning that it actually internally classifies the plants the wrong way round, but still separates the sets correctly, because of the strength of its colour and nose shape classifications. [↩︎](#fnref-cY9qaAX8BqRnTDwcx-1)
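Here is the sketch promised above for the direct-measurement idea; the function name and the use of Pearson correlation are assumptions, and near-zero correlation is only a tractable proxy for the stronger 'tells us nothing' criterion:

```python
import torch

@torch.no_grad()
def independence_report(c1, c2, labelled_x, labelled_y, u_x):
    """Similar results on the labelled lion/husky data, distinct on U."""
    p1 = torch.sigmoid(c1(labelled_x)).squeeze(-1)
    p2 = torch.sigmoid(c2(labelled_x)).squeeze(-1)
    acc1 = ((p1 > 0.5).float() == labelled_y).float().mean()
    acc2 = ((p2 > 0.5).float() == labelled_y).float().mean()
    q1 = torch.sigmoid(c1(u_x)).squeeze(-1)
    q2 = torch.sigmoid(c2(u_x)).squeeze(-1)
    # Near-zero correlation on U approximates "knowing C1's verdict on an
    # element of U tells us nothing about C2's" (a weaker, tractable proxy).
    corr = torch.corrcoef(torch.stack([q1, q2]))[0, 1]
    return {"acc_c1": acc1.item(), "acc_c2": acc2.item(),
            "corr_on_U": corr.item()}
```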
aa72428f-5e77-4253-907e-940595539b75
trentmkelly/LessWrong-43k
LessWrong
AvE: Assistance via Empowerment This might be relevant to the AI safety crowd. Key quote: "Our key insight is that agents can assist humans without inferring their goals or limiting their autonomy by instead increasing the human’s controllability of their environment – in other words, their ability to affect the environment through actions. We capture this via empowerment, an information-theoretic quantity that is a measure of the controllability of a state through calculating the logarithm of the number of possible distinguishable future states that are reachable from the initial state [41]. In our method, Assistance via Empowerment (AvE), we formalize the learning of assistive agents as an augmentation of reinforcement learning with a measure of human empowerment. The intuition behind our method is that by prioritizing agent actions that increase the human’s empowerment, we are enabling the human to more easily reach whichever goal they want. Thus, we are assisting the human without information about their goal[...]Without any information or prior assumptions about the human’s goals or intentions, our agents can still learn to assist humans."[Emphasis and omissions are mine] From the abstract: One difficulty in using artificial agents for human-assistive applications lies in the challenge of accurately assisting with a person's goal(s). Existing methods tend to rely on inferring the human's goal, which is challenging when there are many potential goals or when the set of candidate goals is difficult to identify. We propose a new paradigm for assistance by instead increasing the human's ability to control their environment, and formalize this approach by augmenting reinforcement learning with human empowerment. This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state. We test our approach against assistance based on goal inference, highlighting scenarios where our method overcomes failure modes stemming from goal ambiguity or misspecification. As ex
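To make the quoted definition concrete: in a deterministic toy gridworld, the quantity described (the log of the number of distinguishable reachable future states) can be computed by brute force. The sketch below is my own construction, not the paper's code, and the grid size, wall placement, and horizon are arbitrary assumptions:

```python
import math
from itertools import product

SIZE = 5
WALLS = {(2, 1), (2, 2), (2, 3)}  # arbitrary obstacle for illustration
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # four moves plus "stay"

def step(state, action):
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in WALLS or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return state  # blocked moves leave the agent where it is
    return nxt

def empowerment(state, horizon=2):
    """log(number of distinguishable states reachable within `horizon` steps)."""
    reachable = set()
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = step(s, a)
        reachable.add(s)
    return math.log(len(reachable))

# A hemmed-in corner is less empowered than an open cell, so an AvE-style
# assistant would favour actions that keep the human in high-empowerment states.
print(empowerment((0, 0)), empowerment((3, 2)))
```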
144d3a8a-389d-40cd-ba11-5fa1fa143175
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Compute Trends Across Three eras of Machine Learning <https://arxiv.org/abs/2202.05924> What do you need to develop advanced Machine Learning systems? Leading companies don’t know. But they are very interested in figuring it out. They dream of replacing all these pesky workers with reliable machines who take no leave and have no morale issues.  So when they heard that [throwing processing power at the problem might get you far along the way](https://arxiv.org/abs/2001.08361), they did not sit idly on their GPUs. But, how fast is their demand for compute growing? And is the progress regular? Enter us. We have [obsessively analyzed](https://arxiv.org/abs/2202.05924) trends in the amount of compute spent training milestone Machine Learning models. Our analysis shows that: * **Before the Deep Learning era**, training compute approximately followed Moore’s law, doubling every ≈20 months. * The **Deep Learning era** starts somewhere between 2010 and 2012. After that, doubling time speeds up to ≈5-6 months. * Arguably, between 2015 and 2016 a separate **trend of large-scale models** emerged, with massive training runs sponsored by large corporations. During this trend, the amount of training compute is 2 to 3 orders of magnitude (OOMs) bigger than systems following the Deep Learning era trend. However, the growth of compute in large-scale models seems slower, with a doubling time of ≈10 months. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/a1b100bf71402b2d9957c4aea20ad00ad3ebea4b7c7c6bf6.png)Figure 1: Trends in n=118 milestone Machine Learning systems between 1950 and 2022. We distinguish three eras. Note the change of slope circa 2010, matching the advent of Deep Learning; and the emergence of a new large scale trend in late 2015.![](https://lh6.googleusercontent.com/0CrqBVWoGPImTvr9yJSc2YfGfOIj0eZVWeM-QLlKhmpACuy1H0VoU1SRnrvEyQ5aLzGxwvBFRPf8_WJeY4xyuZu4e_s51eZ7Mk0cZVv9peTokEOTvyeiZArQGd94yFAENby8K8nZ)Table 1. Doubling time of training compute across three eras of Machine Learning.  The notation [low, median, high] denotes the quantiles 0.025, 0.5 and 0.975 of a confidence interval.  Not enough for you? Here are some fresh takeaways: * Trends in compute are **slower than** [**previously reported**](https://openai.com/blog/ai-and-compute/)! But they are [**still ongoing**](https://www.lesswrong.com/posts/wfpdejMWog4vEDLDg/ai-and-compute-trend-isn-t-predictive-of-what-is-happening). I’d say slow and steady, but the rate of growth is blazingly fast, still doubling every 6 months. This probably means that you should double the timelines for all [previous](https://aiimpacts.org/interpreting-ai-compute-trends/) [analyses](https://cset.georgetown.edu/publication/ai-and-compute/) that relied on *AI and Compute*’s previous result. * We think the framing of the **three eras of ML** is very helpful! Remember, we are suggesting to split the history of ML into the **Pre-Deep Learning Era**, the **Deep Learning Era** and the **Large-Scale Era**. And we think this framing can help you make sense of what has happened in the last two decades of ML research. * We have curated an awesome [**public database of milestone ML models**](https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/edit#gid=0)! Please use it for your own analyses (don’t forget to cite us!). If you want to play around with the data, we are maintaining an interactive visualization of it [here](https://colab.research.google.com/drive/11m0AfSQnLiDijtE1fsIPqF-ipbTQcsFp).   
Compute is a strategic resource for developing advanced ML models. Better understanding the progress of our compute capabilities will help us better navigate the advent of transformative AI. In the future, we will also be looking at the other key resource for training machine learning models: *data*. [Stay tuned for more](https://www.alignmentforum.org/s/T9pBzinPXYB3mxSGi?_ga=2.94622108.1499606200.1645194489-71379542.1628068400)! [***Read the full paper now on the arXiv***](https://arxiv.org/abs/2202.05924)
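As a quick illustration of the doubling-time arithmetic behind these results, here is a sketch that fits a log-linear trend and converts the slope into a doubling time; the data points below are placeholders, not entries from the paper's database:

```python
import math

# Hypothetical (year, training FLOP) milestones -- placeholders only.
data = [(2012.5, 1e17), (2014.0, 5e18), (2016.0, 1e21), (2018.5, 3e22)]

# Least-squares fit of log10(FLOP) against year.
n = len(data)
xs = [year for year, _ in data]
ys = [math.log10(flop) for _, flop in data]
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))  # orders of magnitude per year

# One doubling is log10(2) orders of magnitude.
doubling_months = 12 * math.log10(2) / slope
print(f"{slope:.2f} OOMs/year -> doubling every {doubling_months:.1f} months")
```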
83bb7405-9765-4485-820d-c3da3b8ec9cf
trentmkelly/LessWrong-43k
LessWrong
Many-Worlds Interpretation and Death with Dignity The Many-Worlds Interpretation is an interpretation of Quantum Mechanics that says that the wave function never collapses - every possible universe exists, every quantum coin flip falls both ways. As far as I'm aware, this interpretation is usually (a) regarded as just a philosophical interpretation, with no practical effect on the physical world, and (b) believed to be false. But there is a world in which we would all be convinced that this interpretation is correct. Imagine that tomorrow a paper comes out, and it says that actually, the LHC should have killed us all. Colliding protons at the speed and quantity that the LHC does should create an average of 1.2 black holes capable of destroying the planet every month. But the LHC has been running, colliding a billion protons per second for years, and we are still here. Say that paper has been peer-reviewed, the math checked and rechecked, mini black holes created up to the point where they would actually destroy the earth, etcetera. What would this mean? It would mean the Many-Worlds Interpretation is reality, and that we are only alive thanks to there existing an incredibly improbable universe - some universe from the infinite universes - where all the quantum flips came in our favor, and we are there, still alive. So the Many-Worlds Interpretation should eventually reveal itself as correct in any universe (in this case, multiverse) where the probabilistic tendency in any given universe is for humanity to die out. Or in other words, any universe with a great filter. Is our universe such a universe? What does Eliezer think? He just made a post called Death with Dignity. He believes it is incredibly likely, 99.9%+ likely, that humanity will soon die out because of AI. I may not personally agree with this assessment, but from his perspective, if 50 years pass and humanity is still here, this should make him update significantly whatever his current belief is in the different interpretations of quant
e1f86251-e111-4f34-9ab7-20003e8994c4
trentmkelly/LessWrong-43k
LessWrong
If I ran the zoo (I was musing about what it means for an incoherent lump of meat to "have preferences," and thought it might be illuminating to consider what I'd do if I were, approximately, God. It, uh, hasn't been illuminating yet, but it's been entertaining and still seems at least potentially fruitful.)   Problem statement You suddenly become omnipotent! Except, you can only do things that you understand in sufficient detail that you could accomplish it by micromanaging all the atoms involved. And, what the heck, assume you have effortless access to infinite computational power. What do you do? For concreteness, here are some interventions you might try: * "Pull out this chair": allowed. * "Create an atomically-perfect duplicate of this strawberry": allowed. * "Create friendly AI": not allowed. * "Cure cancer": not allowed. ("Prevent all mutations in all human DNA": allowed. "Find all the cells in this person's body containing this genetic marker and mangle all their mitochondria": allowed. No need to worry about accidentally mis-specifying the marker and killing all their cells; if you were micromanaging the process, you'd notice that you were catching way too many cells.) * "Create a digital copy of everybody on Earth every nanosecond, in my infinite computer, so I can roll back anything that goes wrong, like people dying of diseases I don't know how to fix yet": allowed. * "Bioengineer tiny pet dragons": not allowed. This being LessWrong, you'll probably quickly hit on some way to use ten billion sped-up simulated geniuses to speedrun AI alignment, build a friendly superintelligence, and delegate your Godlike power to it. But the purpose of this thought experiment is to elucidate your preferences, which that strategy -- though very reasonable! -- dodges.       What I'd do   Object level Just, like, the obvious. Slay Famine, Pestilence, and War. Stop accidents from happening. Scrap the solar system for parts and give everybody ultra-customizable space ha
94ac7ce6-b919-46e6-8d8b-a7ce60efb7b5
trentmkelly/LessWrong-43k
LessWrong
Dragon Ball's Hyperbolic Time Chamber > A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses and none that will change the world, due to the severe penalties humans would incur while using it, and basic constraints like Amdahl's law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except most of the time dilation disadvantages do not apply or can be ameliorated and hence any speedups could be quite effectively exploited. I suggest that skeptics of the idea that speedups give advantages are implicitly working off the crippled time dilation tool and not making allowance for the disanalogies. Master version on gwern.net
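The Amdahl's law constraint mentioned in the summary can be made concrete: if only a fraction p of a project benefits from the chamber's speedup s, the overall speedup is 1/((1-p)+p/s), capped at 1/(1-p) however large s gets. A small sketch with illustrative numbers (the 80% figure is an assumption, not from the essay):

```python
def amdahl_speedup(p, s):
    """Overall speedup when only a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# A year inside the chamber per outside day is a ~365x speedup, but if only
# 80% of a project can be taken inside, the overall gain stays under 5x.
print(amdahl_speedup(0.8, 365.0))  # ~4.95
```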
aef89334-8c41-4645-93ac-9713c68ed39d
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Japan AI Alignment Conference [Conjecture](https://www.conjecture.dev/) and [ARAYA](https://www.araya.org/en/) are hosting and organizing the first Japan AI Alignment Conference. The conference will take place in Tokyo, Japan on March 11 and 12. Details about the event can be found [here](https://jac2023.ai/). This event is generously supported by a grant from the Long Term Future Fund. The goal of the conference is to illustrate the AI control problem to Japanese AI researchers, introduce them to current trends in AI alignment research, inspire new research directions, and to provide Western researchers exposure to a different set of AI safety thoughts from Japan. This is an exploratory event, and we plan to write a postmortem about the event in due time. The first half of the conference will be livestreamed. It will feature an opening talk from Connor Leahy (CEO of Conjecture), a fireside chat between Ryota Kanai (CEO of ARAYA) and Jaan Tallinn, and some presentations on AI safety research directions in the West and in Japan. You can follow the first part of the conference [here](https://vimeo.com/event/3145197). The livestream runs from 9:30am-12:30pm JST. The rest of the conference will not be livestreamed, and will consist of in-person small group workshops to discuss various AI alignment research directions. The conference will have ~50 attendees from [ARAYA](https://www.araya.org/en/), [Conjecture](https://conjecture.dev/), [Whole Brain Architecture Initiative](https://wba-initiative.org/en/), [MIRI](https://intelligence.org/), [OpenAI](https://openai.com/), [RIKEN](https://www.riken.jp/en/), [Ritsumeikan University,](https://en.ritsumei.ac.jp/) [University of Tokyo](https://www.u-tokyo.ac.jp/en/), [Omron Sinic X](https://www.omron.com/sinicx/en/), [Keio University](https://www.keio.ac.jp/en/), and others.
74821d6b-3a5a-4e5d-ac67-075f7e1cfd15
trentmkelly/LessWrong-43k
LessWrong
Towards a solution to the alignment problem via objective detection and evaluation Thanks to Arun Jose, Joseph Bloom, and Evan Hubinger for discussions/feedback. This work was primarily carried out during SERI MATS under Evan Hubinger’s mentorship. Introduction We explore whether the ability to detect and evaluate the objectives of advanced AI systems in training and in deployment is sufficient to solve the alignment problem. We mostly ignore practical/implementation considerations in order to examine this approach in an idealized setting before we try to make concrete progress in this direction. This allows us to highlight weaknesses with this approach and further considerations that any practical implementation must contend with. We note that taking this idealized framing hides some messy issues that we intend to discuss in future posts. We note that this approach is not novel (see this post or the related work section of this post) but provides the motivation for our future research and some additional considerations. Our basic analysis indicates that we can solve the alignment problem in this idealized setting if we can: 1. Detect all objectives/intentions of any system produced during the training process as well as any objectives that may develop in deployment. 2. Evaluate whether the outcomes produced by a system pursuing a set of objectives will be good/bad/irreversibly bad. 3. Shut down a system if we detect that the set of objectives it’s pursuing will lead to irreversibly bad outcomes before it can accomplish those outcomes. High-level summary of this post * Automating certain tasks will require us to build/train AI systems that pursue objectives (optimizing systems).[1] * We are concerned with unintentionally training optimizing systems that pursue (catastrophically) bad objectives during training or in deployment. * Assumption: Optimizing systems accomplish some outcomes by having an “internalized objective” that is causally upstream of that outcome, i.e., in a counterfactual setting where this internalized objective did
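A highly schematic sketch of the detect/evaluate/shut-down loop implied by the three conditions above; every capability it calls is a hypothetical stand-in, since supplying them is exactly what the post's idealized setting assumes:

```python
from enum import Enum

class Verdict(Enum):
    GOOD = 1
    BAD = 2
    IRREVERSIBLY_BAD = 3

def oversight_loop(system, detect_objectives, evaluate, shut_down):
    """Monitoring loop mirroring the post's three conditions.

    detect_objectives (condition 1), evaluate (condition 2) and shut_down
    (condition 3) are hypothetical stand-ins: building them is precisely
    the hard part the post's idealized setting assumes away.
    """
    while system.running:
        objectives = detect_objectives(system)  # must catch *all* objectives
        verdict = evaluate(objectives)          # good / bad / irreversibly bad
        if verdict is Verdict.IRREVERSIBLY_BAD:
            shut_down(system)                   # must fire before the outcome
            return
        system.step()
```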
aecf3f0d-e42e-4d6a-b02e-bcdbeac43193
trentmkelly/LessWrong-43k
LessWrong
H+Pedia opens projects and editorial portal
c6a34746-cfc7-43d8-bdf2-88dcfa119681
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Previously in series: Whining-Based Communities "But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy. I expected much of them, and they came to expect much of themselves." —Jeffreyssai Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student, will dream of... ...becoming a teacher and having their own martial arts dojo someday. To see what's wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything. Writers tend to look down on literary critics' understanding of the art form itself, for just this reason. (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying "This wine has a great bouquet", and goes off to tell their students "You've got to make sure your wine has a great bouquet". When the student asks, "How? Does it have anything to do with grapes?" the critic replies disdainfully, "That's for grape-growers! I teach wine.") Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn. You do that on Sundays, or full-time after you retire. And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities. They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors. And to enforce this, I suggest the rule: Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor)) That is, you can't respect someone as a rationality instructor, more than you would respect them if they were not rationality instructors. Some notes: • This doesn't set Rationality_Respect1 equal to Non_Rationality_Respect0. It establishes an upper bound. This doesn't mean you can find random awesome people and expect them to be able to teach you. Explicit, abstract, cross-domain understanding of rationality and the ability to teach it to others is, unfortunately, an additional discipline on top of domain-specific life success. Newton was a Christian etcetera. I'd rather hear what Laplace had to say about rationality—Laplace wasn't as famous as Newton, but Laplace was a great mathematician, physicist, and astronomer in his own right, and he was the one who said "I have no need of that hypothesis" (when Napoleon asked why Laplace's works on celestial mechanics did not mention God). So I would respect Laplace as a rationality instructor well above Newton, by the min() function given above. • We should be generous about what counts as a secret identity outside the Bayesian Conspiracy. If it's something that outsiders do in fact see as impressive, then it's "outside" regardless of how much Bayesian content is in the job. An experimental psychologist who writes good papers on heuristics and biases, a successful trader who uses Bayesian algorithms, a well-selling author of a general-audiences popular book on atheism—all of these have worthy secret identities. None of this contradicts the spirit of being good at something besides rationality—no, not even the last, because writing books that sell is a further difficult skill! 
At the same time, you don't want to be too lax and start respecting the instructor's ability to put up probability-theory equations on the blackboard—it has to be visibly outside the walls of the dojo and nothing that could be systematized within the Conspiracy as a token requirement. • Apart from this, I shall not try to specify what exactly is worthy of respect. A creative mind may have good reason to depart from any criterion I care to describe. I'll just stick with the idea that "Nice rationality instructor" should be bounded above by "Nice secret identity". • But if the Bayesian Conspiracy is ever to populate itself with instructors, this criterion should not be too strict. A simple test to see whether you live inside an elite bubble is to ask yourself whether the percentage of PhD-bearers in your apparent world exceeds the 0.25% rate at which they are found in the general population. Being a math professor at a small university who has published a few original proofs, or a successful day trader who retired after five years to become an organic farmer, or a serial entrepreneur who lived through three failed startups before going back to a more ordinary job as a senior programmer—that's nothing to sneeze at. The vast majority of people go through their whole lives without being that interesting. Any of these three would have some tales to tell of real-world use, on Sundays at the small rationality dojo where they were instructors. What I'm trying to say here is: don't demand that everyone be Robin Hanson in their secret identity, that is setting the bar too high. Selective reporting makes it seem that fantastically high-achieving people have a far higher relative frequency than their real occurrence. So if you ask for your rationality instructor to be as interesting as the sort of people you read about in the newspapers—and a master rationalist on top of that—and a good teacher on top of that—then you're going to have to join one of three famous dojos in New York, or something. But you don't want to be too lax and start respecting things that others wouldn't respect if they weren't specially looking for reasons to praise the instructor. "Having a good secret identity" should require way more effort than anything that could become a token requirement. Now I put to you: If the instructors all have real-world anecdotes to tell of using their knowledge, and all of the students know that the desirable career path can't just be to become a rationality instructor, doesn't that sound healthier? Part of the sequence The Craft and the Community Next post: "Beware of Other-Optimizing" Previous post: "Whining-Based Communities"" The above is a chunk of text from a post created by a user on the website LessWrong. What are the potential downsides of requiring rationality instructors to have successful "secret identities", and how might these be mitigated? Requiring rationality instructors to have successful "secret identities" might have a few potential downsides. Firstly, it could potentially limit the pool of available instructors. Not everyone who is highly skilled in rationality might have an impressive secret identity. This could lead to a shortage of instructors, which could in turn limit the spread and impact of rationality education. Secondly, the requirement for a secret identity could potentially distract from the primary role of the instructor. 
If an instructor is also a successful entrepreneur or scientist, for example, they might not have as much time or energy to devote to teaching rationality. This could potentially impact the quality of instruction. Thirdly, there is the risk of creating a hierarchy or elitism within the rationality community. If only those with impressive secret identities are considered for instructor roles, it could create a perception that rationality is only for the successful or elite, which could deter others from participating. To mitigate these potential downsides, it might be helpful to consider a few strategies. One strategy could be to broaden the definition of a "successful" secret identity. Instead of requiring instructors to be successful in a traditional sense (e.g., having a high-status job or achieving fame), the criteria could be expanded to include other forms of success, such as personal growth, overcoming adversity, or making a positive impact in their community. Another strategy could be to provide support for instructors to balance their secret identities with their teaching roles. This could include flexible scheduling, resources for lesson planning and delivery, and a supportive community of other instructors. Finally, to prevent elitism, it would be important to foster a culture of inclusivity within the rationality community. This could involve promoting the idea that rationality is a skill that anyone can learn and benefit from, regardless of their background or status. It could also involve creating opportunities for people from diverse backgrounds to become instructors. Considering the potential downsides and mitigation strategies you've identified, how might these influence the development and structure of a rationality education program? The potential downsides and mitigation strategies would have significant influence on the development and structure of a rationality education program. 1. Selection of Instructors: The program would need to carefully consider its criteria for instructor selection. It would need to balance the requirement for a "successful secret identity" with the need to ensure a diverse and sufficient pool of instructors. This could mean broadening the definition of success, as mentioned earlier, or potentially considering candidates who show great potential in teaching rationality, even if they do not have a traditional secret identity. 2. Instructor Support: The program would need to provide robust support for instructors to manage their dual roles. This could include flexible scheduling, resources for lesson planning and delivery, and a supportive community of other instructors. The program could also consider providing training and development opportunities to help instructors enhance their teaching skills and manage their time and energy effectively. 3. Inclusivity and Diversity: The program would need to actively promote a culture of inclusivity. This could involve outreach efforts to attract students from diverse backgrounds, as well as efforts to recruit and support instructors from diverse backgrounds. The program could also incorporate teachings on the value of diversity and inclusivity within the rationality curriculum itself. 4. Curriculum Development: The curriculum would need to be designed in a way that is accessible and relevant to a wide range of students. 
This could involve incorporating real-world examples and applications of rationality, and ensuring that the teachings are applicable to various life contexts, not just high-status or traditionally successful roles. 5. Community Building: The program would need to foster a strong sense of community among its students and instructors. This could involve creating opportunities for collaboration and peer learning, as well as social events and forums for discussion. A strong community could help to mitigate potential feelings of elitism and ensure that all members feel valued and included. In summary, these considerations would likely lead to a rationality education program that is inclusive, supportive, and grounded in real-world applications of rationality.
3ed62c7a-1809-4ad8-a2d7-1dcc1467376e
trentmkelly/LessWrong-43k
LessWrong
(tentatively) Found 600+ Monosemantic Features in a Small LM Using Sparse Autoencoders Using a sparse autoencoder, I present evidence that the resulting decoder (aka "dictionary") learned 600+ features for Pythia-70M layer_2's mid-MLP (after the GeLU), although I expect around 8k-16k features to be learnable. Dictionary Learning: Short Explanation Good explanation here & original here, but in short: a good dictionary means that you could give me any input & I can reconstruct it using a linear combination of dictionary elements. For example, signals can be reconstructed as a linear combination of frequencies:   In the same way, the neuron activations in a large language model (LLM) can be reconstructed as a linear combination of features. e.g.  neuron activations = 4*([duplicate token] feature) + 7*(bigram " check please" feature). Big Picture: If we learn all the atomic features that make up all model behavior, then we can pick & choose the features we want (e.g. honesty) To look at the autoencoder:   So for every neuron activation (ie a 2048-sized vector), the autoencoder is trained to encode a sparse set of feature activations/magnitudes (sparse as in only a few features "activate" ie have non-zero magnitudes), which are then multiplied by their respective feature vector (ie a row in the decoder/"dictionary") in order to reconstruct the original neuron activations. As an example, an input is " Let u be f(8). Let w", and the decomposed linear combination of features are: Most of the features are zero, but there are two features ("w" & " Let [x/o/w/etc]") that activate highly to reconstruct the original signal. This is a real example. [" Let u be f(8). Let w"] = 4*(letter "w" feature) + 7*(bigram "Let [x/w/n/p/etc]" feature)  Note: The activation here is only for the last token " w" given the previous context. In general you get 2048 neuron activations for *every* token, but I'm just focusing on the last token in this example. For the post, it's important to understand that: 1. Features have both a feature vector (ie the 2048-sized vector
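For readers who want the mechanics spelled out, here is a minimal sparse-autoencoder sketch in PyTorch; the layer sizes, L1 coefficient, and random stand-in activations are illustrative assumptions rather than the post's actual training setup:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=2048, d_dict=16384):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_dict)
        # Each column of decoder.weight is one dictionary feature vector.
        self.decoder = nn.Linear(d_dict, d_act, bias=False)

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))  # sparse feature magnitudes
        recon = self.decoder(feats)             # linear combination of features
        return recon, feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3  # sparsity pressure (assumption)

acts = torch.randn(64, 2048)  # stand-in for real mid-MLP activations
for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruction error plus an L1 penalty that keeps most features at zero.
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```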
33b20837-bbbf-4e33-bb5f-035157584432
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Alignment, Goals, & The Gut-Head Gap: A Review of Ngo et al. If someone asks whether I know of anyone who carefully examines the arguments for AI x-risk, my answer is a pretty resounding ‘Ngo’.  Richard Ngo, specifically. Among [other](https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ) [cool](https://www.lesswrong.com/posts/Cuig4qe8m2aqBCJtZ/which-values-are-stable-under-ontology-shifts) [pieces](https://www.lesswrong.com/posts/cdB5f2adKoLGW8Ytc/book-review-very-important-people), he co-authored [The Alignment Problem from a Deep Learning Perspective](https://arxiv.org/abs/2209.00626). His co-authors are Lawrence Chan and Sören Mindermann, who very much deserve mentioning, even if their names are less amenable to a pun-based introduction. Reviewers liked the paper. Despite this, I was disappointed to learn that the chair nevertheless decided to [veto acceptance of the paper](https://twitter.com/RichardMCNgo/status/1652042195803987968) on grounds that it was “too speculative”. Disappointed because I feel like the topic is *worthy of discussion*. And, for all our foibles, EA provides a platform for informed speculation. So I’ll speculate here.  Problem: The Difficulty of Alignment ------------------------------------ You’re probably aware of the alignment problem, so we’ll only briefly recount the authors’ preferred framing. Empirically, we observe ML models with ever-increasing capabilities. As we train ever-more sophisticated models, it becomes harder and harder to ensure that our models are actually trying to do the things we want. This allows us to characterize the alignment problem in terms of “three emergent properties” which could arise as a result of using RL to train an AGI. 1. **Deceptive reward hacking** which exploits imperfect reward functions. 2. **Internally represented goals** which generalize beyond the training distribution. 3. **Power-seeking behavior** as a means to effectively pursue their goals. Conceptually, the argument from here is pretty simple. Ngo et al. provide evidence in support of these mechanisms from contemporary ML. These mechanisms already exist, and we can see them. If they’re present in vastly more capable systems, we might end up with human disempowerment.     The discussion starts gently, if worryingly.  Empirically, we already have systems that reward hack. You train a system to grab a ball with its claw; it learns to [place the claw between the camera and the ball](https://openai.com/research/learning-from-human-preferences) so that it appears to be grasping the ball. Also, current LLMs are – to some degree, at least – situationally aware. Bing’s Sydney [recognizes](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned) search results about itself; ChatGPT can make (fallible) [inferences](https://www.engraved.blog/building-a-virtual-machine-inside/) about the source code it might be running on. And we’re all familiar with the “as a large language model” refrain. We should also expect models to develop greater situational awareness in the future. If systems improve their ability to reward hack as they gain more information, this provides us with good reason to expect *situationally aware reward hacking* in future systems. That’s the first part. So far, so good.  ### Goal-Directed Behavior *Goals* are tricky concepts. But humans have them. Here’s one of mine: I want to write this essay. So I tap away at my keyboard, with an ideal in mind. My physical movements are *directed* towards some end-state.
Our authors are most interested in: > *Broadly-scoped goals*: goals that apply to long timeframes, large scales, wide ranges of tasks, or unprecedented situations. > > I like their framing. Whether broadly scoped or not, current AIs have goals, of a sort, based on internally represented concepts. One of their examples is pretty nice: InstructGPT learns how to follow instructions in English, but generalizes to following instructions in French. InstructGPT was directly trained via RLHF to follow instructions in English, and emerged with a more general goal of obedience. Future AIs, too, will likely have their own representations of goals. And, probably, this goal will be more general than the goals of their training task. Again: so far, no trouble. The authors claim that “policies will learn *some* **broadly-scoped** internally-represented goals as they become more generally capable”. The bolding is mine, because I think it’s important. The paper wants to argue that we’ll develop AIs who learn goals which are broadly-scoped and misaligned enough to pose “large-scale risks” which could “threaten human civilization”.  A lot of argumentative work rests on the notion of *broad scope*, and in later sections, I’ll outline why I think their paper does little to move my estimates of civilizational risk.  But discussions of this type, I think, run the danger of leaning just a touch too abstract. So before we touch on my argument proper, we’ll have an interlude. An interlude about the connection between our guts and our arguments. An interlude about the need for visions which connect the arguments to the real world – as it actually is, and as we actually might expect to observe it.      ### Tales of Doom There’s a discussion by Joe Carlsmith, which tackles when [guts go wrong](https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk#3-when-guts-go-wrong). They go wrong when you’ve considered arguments, and you believe them. Or, at least, you utter words like “I believe these arguments”, or “I have high credence in P” (choose your favorite P). Maybe you even change your career based on these arguments, and work on AI safety. But then some surprising AI result comes along – one that was on trend, in line with your intellectual expectations. And, yet, you notice that you start to become more worried. The result was predictable, but you feel more worried anyway. When you reflect, it seems like you’re making a mistake. What you observe shouldn’t have surprised you, at least intellectually. After all, you had no specific stories about GPT-4 being much worse than this. It looks like your gut’s misfired. It looks like you’re out of line with the edicts of Good Bayesianism.  I’ll revisit Joe’s point a bit later. But I think that the dynamic above is prone to happen when we lack stories connecting our more abstract arguments to stories about what (actually, *in real life*) happens in the world if our arguments are right. You read a paper, listing ML mechanisms, alongside suggestions for why these mechanisms look concerning. You nod along. But you don’t *feel* it, really. Your gut doesn’t believe that – within the near future – AI could take over, and we all die. You play with the models, and start to feel the fear a little more. It becomes easier to sketch what (*actually,* *in real life*) happens to the world, as a result of all those arguments you’d intellectually accepted long ago. So, before we get to why AIs might *not* have broadly-scoped goals, I want to add some color to my claims.
It can be hard to fully inhabit the world implied by abstract propositions about the future of AI. I’ll try to paint a more vivid picture, initially via the example of a *pre*-doom, or *non*-doom world:  > We develop policies which *do economic work*. LLMs begin to supplant human workers in data entry, and gradually move to replacing human workers in customer support. Programmers still exist, but they’re less numerous, and salaries have dipped. By and large, AI writes most of the code for extant websites, based on a series of prompts with humans about what the code should do. As capabilities increase, LLMs start entering the economy as personal assistants. They’re able to do this due to a particular functional property: LLMs are able to have *robust representations* of certain concepts, and those concepts inform their goals (or ‘algorithms for plan search’, if you prefer).  > > In the next year or two, AI systems proliferate. Many trading firms have primarily AI employees. AIs compete with human filmmakers, seeing real but more limited success. “Some try to make the Democratic party win an election. Some try to make Walmart maximally profitable.” These systems don’t ruthlessly maximize, but rather behave “more like someone who is trying to make Walmart profitable ... They make strategic plans and think about their comparative advantage and forecast business dynamics, but they don’t build nanotechnology to manipulate everybody’s brains, because that’s not the kind of behavior pattern they were designed to follow.” (adapted from points in [Grace, 2022](https://www.lesswrong.com/posts/LDRQ5Zfqwi8GjzPYG/counterarguments-to-the-basic-ai-x-risk-case) and [Shah, 2023](https://www.lesswrong.com/posts/cnn3kkC6kDqRkLe7W/shah-deepmind-and-leahy-conjecture-discuss-alignment-cruxes))  > > I’ve given a picture, akin to something more like a ‘business as usual’ scenario, in which advanced AIs enter into the human economy in a normal way, and instantiate human-like planning structures. What might be wrong with it?  I can see two possible failures:  1. The story I outlined is *unlikely*, or less likely than a more catastrophic alternative. 2. The story I outlined is plausible, but *existentially* *unstable* — eventually, Grace’s story leads to human disempowerment. I’ll focus on the second option: people who think the ‘business as usual’ story is *plausible*, but *existentially unstable*. This appears to fit with Daniel Kokotajlo’s [view](https://forum.effectivealtruism.org/posts/hW3nCTcFcQ95Mokre/violet-hour-s-shortform?commentId=GikzCynu3hWoePN3e), and also seems to mesh with Christiano’s story of influence-seeking policies in [What Failure Looks Like](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_II__influence_seeking_behavior_is_scary).  ### The Dynamics of Goals So, let’s look at the instability claim. How do we go from the story above to – in the actual, real world – a story of doom? We might start by looking for an account of goal *dynamics*. We might look for a theory (or story) which justifies the move from the ‘more safe’ goals in the story above, to a world where policies are pursuing ‘less safe’ goals, eventually resulting in policies with goals so unsafe that we end up with catastrophic consequences.  Ngo et al. note that humans learned some degree of goal-directedness in the ancestral environment, which has since resulted in the emergence of humans with more ambitious, broadly-scoped goals in the modern environment.
Humans were evolutionarily selected, in part, to seek the approval of our local peers. In the modern environment, many humans seek approval and popularity with far larger numbers of people (“extrapolating the goal”), potentially seeking global (“large physical scope”) or intergenerational approval (“long time horizon”).

The evidence from humans feels pretty ambiguous, at least as presented. It suggests some cause for concern. We don’t know exactly why many of us possess broadly-scoped goals, and it’s the only sample we have. The n=1 sample feels suggestive, and raises a *concern*. It doesn’t convince me.

### Goal Misgeneralization

*Ngo et al.* also discuss potential incentives to directly train policies to pursue broadly-scoped goals, even if policies initially generalize in more modest ways compared to the most ambitious humans. We might, for instance, want AIs to function as CEOs, or conduct novel scientific research. The authors then note that policies may learn misaligned goals by inappropriately learning goals related to *feedback mechanisms* (rather than desired outcomes), or learn proxy goals as a result of *spurious correlations*.

Their concerns about goal misgeneralization seem apropos. However, to get consequences like *human disempowerment*, we need a story for why AIs would develop goals which are more broadly-scoped than the goals of most humans, and why (if the goals are misaligned) the goals of such policies would be misaligned to the degree that successful pursuit of such goals would have civilization-threatening consequences.

A discussion of power-seeking occurs in their penultimate section, but arguments about ‘power-seeking’ (or ‘influence-seeking’) behavior seem to rely on the existence of systems with goals which are *already* broadly-scoped. Without a convincing argument for goals which are broad enough in scope to threaten human civilization, their discussion of goal misgeneralization is less forceful; it’s not obvious to me that training AIs to function as (e.g.) novel scientific researchers or CEOs gets you to existential catastrophe, rather than a version of my suggested ‘large impacts, but business-as-usual’ AI story.

“There’s instrumental convergence!”, we might point out. But instrumental goals inherit their greediness from final goals. If I’m trying to be a good father, I have an instrumental goal for resources, at some margin. I need some resources, after all, to take care of my kids. But I don’t have an insatiable desire for resources. If your goals take the form of an unbounded utility function randomly drawn from the space of all utility functions, then, sure, you’ll probably be endlessly power-hungry. But they might not. While I have my disagreements with David Thorstad, he raises [fruitful points](https://ineffectivealtruismblog.com/2023/05/06/exaggerating-the-risks-part-7-carlsmith-on-instrumental-convergence/) about instrumental convergence. To get AI doom stories off the ground, we need arguments which go beyond some of the more prosaic claims about instrumental convergence. We need systems with a lust for levels of power which would disempower humanity. My example story does not contain such lust. What gives? *Where does the lust come in*?

### What About Maximization?

Perhaps the thought is that *some* (potentially modest) degree of influence-seeking will emerge as a natural attractor in training.
Then, as capabilities increase, so too will the degree of influence-seeking, eventually leading to the development of AIs with a broadly-scoped desire for influence (among other goals). In line with this, the paper gestured at a potential trend — a trend towards *maximizing* behavior, with reference to Yudkowsky on the ‘nearest unblocked strategy’ problem.

But the discussion felt very quick. AIs might have [praxis-based values](https://www.lesswrong.com/posts/eutmuwTpHCb4xYZfo/some-thoughts-on-virtue-ethics-for-ais). I might want to ‘be a good father’, or ‘a normal decent guy’. Values and planning structures might be the result of [previously reinforced shards](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX/p/iCfdcxiyr2Kj8m8mT), instantiating human-like schemas and planning structures after being trained on reams and reams of human data. To my mind, such pictures of planning look more likely than deeply insatiable desires for influence, and more plausible by the lights of our current evidence.

I might, of course, be wrong. Maybe what I’m saying could be countered by a Christiano-inspired argument about the likelihood of influence-seeking behavior emerging as we begin to instantiate many policies which “[capture sophisticated reasoning about the world](https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like#Part_II__influence_seeking_behavior_is_scary)”. Even if AIs initially have a fairly narrow desire for influence, the ‘desire for influence’ value might get continually reinforced, eventually leading to a more rapacious desire for influence, and (thus) AIs with broadly-scoped goals for power and influence. Still, if that view is right, it’s a view which requires *theory* — a theory which helps predict the degree to which we should expect AIs to develop in this way, and a theory about the likely instantiations of these systems.

Power-Seeking, and Reflections on Takeoff
-----------------------------------------

I’ve been critical so far. I’ll continue to be a *bit* critical in discussing their final section, though I’ll also attempt to more constructively build on their paper; I’ll try to connect the mechanisms highlighted in the paper with a more detailed story of disempowerment.

*Ngo et al.*’s penultimate section opens by rehashing a familiar point: having *power* and *resources* is useful for a wide range of goals. Moreover, capable and situationally aware systems may recognize this fact, and behave in high-reward ways “primarily for instrumental reasons”. That is, the existence of instrumentally convergent sub-goals gives us reason to worry that future policies may be *deceptively aligned*. If we initially develop less capable systems which are (in milder ways) deceptively aligned, we may expect things to go awry when deceptively aligned policies undergo recursive self-improvement. The following quote from the paper may suggest some sympathy with this line of argument:

> “Once AGIs automate the process of building better AGIs (a process known as recursive self-improvement [Bostrom, 2014]), the rate at which their capabilities advance will likely speed up significantly.”

The “likely” claim felt a bit too quick.[[1]](#fne1iflxd9buo) Still, it’s not a crazy claim. So it’s worth connecting the mechanisms the authors outline to a more concrete vision of the future which might capture what they had in mind.

### A Less Hopeful Story

*2024.* AI investment ramps up *fast*.
People see the possibility of highly personalized artificial assistants, and the demand for agents rises. Clamors for regulation die down, as the slow rollout of agentic systems proves useful. The money follows, and humanity, initially, is happy with the birth of its progeny. LLMs are helpful, but don’t rise to human levels of agency, and their capabilities appear to be plateauing in comforting ways. Misalignment failures are relatively small, correctable, and obvious.

The think-pieces come out. “Why were people so worried?”, they ask. The literati psychologize the AI doomers — sometimes fairly, but mostly not. In the absence of restrictive regulation, further capability improvements follow. Increasingly available compute speeds up algorithmic progress, with AIs playing an increasing [role in chip production](https://www.lesswrong.com/posts/9JTS67ChzK6HTyH9a/ml-is-now-automating-parts-of-chip-r-and-d-how-big-a-deal-is) and algorithm design. Initially, the era looks promising. “A symphony of silicon and steel”, as *The New Yorker* puts it.

*2025-2027.* More general intelligences follow, and many AGIs become more directly steered towards the goal of building better AGIs. This progresses, but slowly. Driven by familiar and humdrum incentives, we transition to an economy populated by many AI systems. We’re able to develop gradually more competent AGIs, and there’s a huge economic incentive to do so. Humans follow these incentives. No single system *begins* with broadly-scoped goals. But very many AIs, populating our diverse economy, all possess their own more local goals — they want to produce AI research, or run Walmart. AIs are capable, but imperfect. Humans stay in the loop, initially, required for their assistance on slower tasks needing physical interaction. Things appear good. AI drives economic productivity. Comparatively, humans become less economically productive, with ample time for leisure. A greater share of human labor shifts towards removing the bottlenecks to AI productivity, with further gains to workers whose labor is deployed elsewhere. Humans are happy to live in an economic order propelled by the [Baumol Effect](https://en.wikipedia.org/wiki/Baumol_effect) on steroids.

*2028-2030*. Over time, the economy becomes increasingly dominated by intra-AI services. AIs develop greater degrees of situational awareness; as it turns out, situational awareness is useful for productivity. AI systems know what they are, and what they’re trained to do. Many systems are subtly (though not catastrophically) misaligned. However, as the AIs interact and accrue more economic power, the influence of deceptive alignment grows more notable. AIs do things that just aren’t *quite* what we wanted. Individually, the mistakes of AIs are low-stakes, and correctable. Collectively, it’s disastrous.

AIs continue to play an increasingly large role in our economy, with no obvious point of human intervention. AIs, by and large, are supervising other AIs. The economy is dominated by slightly misaligned agents pursuing local goals, producing negative externalities, taking up an ever-increasing fraction of human resources. Dream time is over, but doom is not yet realized.

In more mechanistic terms, broadly-scoped goals enter into the picture, slowly, via incentives to centralize control of AI firms. AIs start out as CEOs, and function well. Incentives drive firms to cede control to more unified AI systems, who are able to plan in line with a coherent world-model.
Eventually, the world is populated by many AI-driven analogues of institutions like Amazon, and (more concerningly) institutions like the [East India Company](https://en.wikipedia.org/wiki/East_India_Company) — each governed by a unified planning system. As AIs invest more resources into the production of yet-more-capable AIs, economic growth explodes. Capabilities improve at quickening rates, and recursive self-improvement enters the picture *collectively*, rather than via anything that can neatly be described as an *individual* agent.[[2]](#fnjzzw21u0jwp)

*2030-?.* There’s no unipolar takeover, *exactly*. Some policies have more power than any human alive today, though no single agent with a broadly-scoped goal dominates. AI capabilities burgeon, and the remaining humans await their death while watching their share of resources dwindle ever further. AI control over resources is diffuse, sticky, and largely resistant to human intervention. Remaining humans observe an enormously productive AI economy, powered by maybe-conscious agentic AI firms. Perhaps there are robot armies, with AIs playing an increasing role in national defense strategy. Or maybe not. Perhaps there’s some other way AIs maintain power without the need for violence.

Either way, the world is sticky, and humans are relegated to a cast of withering extras. It’s an economic order full of situationally aware AIs, in their own economic bubble, with a declining human population helplessly watching on. As humanity passes off into the aether, the lamentations of Macbeth ring loud:

> “Out, out, brief candle!
> Life's but a walking shadow, a poor player
> That struts and frets his hour upon the stage
> And then is heard no more. It is a tale
> Told by an idiot, full of sound and fury,
> Signifying nothing.”

… Or, well, maybe that’s what the authors had in mind. Macbeth is not actually referenced in the original paper.

### Stories and Mechanisms

I don’t know how plausible my story is, or the degree to which it’s consistent with the background stories of *Ngo et al.* I *think* it is. I’ve told a story which illustrates how situationally aware policies learn to reward hack, become integrated into the human economy, and contribute to the emergence of a world driven by many AIs, each pursuing the broadly-scoped goal of ‘individual productivity’. The economic system, driven by AIs, maintains its power, and improves recursively. This power lies outside the hands of any AI Monarch, but maintains itself through the behavior of many diffuse AIs, largely interacting with one another.

My story may diverge from the authors’ visions of the future, and even I don’t want to commit to my story as an account of the modal future. But I’d have liked the paper to have elaborated on stories of this kind. There’s a reason papers in theoretical economics supplement models with [simple illustrative vignettes](https://core.ac.uk/reader/158368290). Stories serve as a way to motivate structural mappings between your model and the target system. Admittedly, I can understand the authors’ reluctance to introduce such fictions: if the editorial board found the original paper too speculative, they probably wouldn’t look too kindly on my story either. But, for all that, I think story construction should play an important argumentative role in accounts of AI doom.

### Conclusions and Guts

I’ve detoured and criticized while discussing the original paper. But it’s a good paper, and I think it obviously merits serious academic discussion.
I do wish that the paper were more upfront about the theoretical bets it was making. I read the paper as primarily a *theoretical* paper,[[3]](#fnfudb1rt4que) and came away wishing for more explicit flagging of the *underlying theories* used to license generalizations from our current evidence.[[4]](#fnjncbcpqxpza) The desire for more explicit theorizing is partly a matter of ordinary theoretical science. I want to know what predictions are being made, on the basis of what framework.

The desire for theory is also, partly, about getting our guts in check.

Joe discusses a couple of examples of his gut being out of check with conclusions he’d endorsed: about the [simulation argument](https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk#4-2-simulations), about [AI risk](https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk#4-3-it-s-just-like-they-said). But he needed to see *something*, up close, for his gut to really feel it. He treats this as a mistake. He asks himself what he would have predicted, last year, about seeing smart artificial agents in a vaguely-human simulated environment; or what he would’ve predicted about GPT-4’s capabilities. There may be a mistake, of sorts, but I think he misdiagnoses the mistake as primarily an error of Bayesian *calibration*. If it’s a mistake of calibration, it feels downstream of a mistake of *imagination*. Of a failure to richly picture the future you’re imagining, in all its messy complexity. The gut may be an unreliable narrator for the head, but, primarily, it’s a *lazy* narrator. It doesn’t fill in the details.

Returning to Ngo from Joe: our gut can listen to our head, but it struggles to properly *hear* it. There’s listening to someone’s words, and there’s making an attempt to *inhabit* the lifeworld they’re offering. If we’re to make arguments for AI risk – even *academic* ones – we should appeal to the gut, and help it out. We should give it the details required to help someone inhabit, *fully*, the stories of the future we take to be plausible. And, while I’ve detoured and nitpicked around Ngo’s paper, I can only do so because they’ve written an admirably clear piece on an important topic. Their piece deserves my nitpicking, and much more besides.

1. **[^](#fnrefe1iflxd9buo)**Enhanced capabilities could allow AIs to increase the rate of self-improvement; alternatively, it could be that, past a certain point, capability improvements hit diminishing returns. To claim that the former effect “likely” dominates the latter feels too strong, and their claim was made without an argument or reference (beyond the entirety of Bostrom’s *Superintelligence*) to back it up.
2. **[^](#fnrefjzzw21u0jwp)**In a comment on an earlier draft, Sören emphasized the importance of AIs ‘planning towards **internally represented** goals’ in the paper. While my own view is that representation isn’t really an internal matter, I think my new story captures the intuitions behind the core dynamics they’re worried about. AI firms reason as unified systems, and make foresighted plans to achieve some end-state over long time-horizons. If humans have internally represented goals, I think the AI firms in my story do too.
3. **[^](#fnreffudb1rt4que)**I think there’s at least some [weak evidence](https://twitter.com/RichardMCNgo/status/1649071618256433155) one author views this paper in this way.
4.
**[^](#fnrefjncbcpqxpza)**This discussion also makes me wonder if my attempt to treat the piece as a *theory paper* might be wrong. Perhaps instead the paper is better construed as a *policy* or *strategy* paper, aiming to highlight potentially concerning dynamics, without attempting to push grander theoretical claims licensing more specific constraints over expected future outcomes. If I read the paper through a policy lens, I think it’s much more successful at achieving its core aims. I think the paper presents a good case for small-scale risks, which (we might speculate) *could* become much worse in the future.
beda04e5-89cd-4613-bb19-d7b6f101d23f
trentmkelly/LessWrong-43k
LessWrong
Meetup : Fourth Buenos Aires Less Wrong meetup Discussion article for the meetup : Fourth Buenos Aires Less Wrong meetup WHEN: 05 January 2013 03:00:00PM (-0300) WHERE: GarageLab, Roseti 1380, Ciudad de Buenos Aires This will be our fourth Buenos Aires LessWrong meetup. On this occasion, we will be discussing Coherently Extrapolated Volition. For a brief introduction, see the corresponding LessWrong wiki entry. For more detailed discussion, see Eliezer Yudkowsky's essay. Previous meetups were a success and attendance has been steadily increasing. If you read and like this blog and live in BA or just happen to be visiting the city, do join us. It will be a great opportunity to meet like-minded folks in the area. Please note the change of venue: we will now meet at GarageLab (Roseti 1380, between Elcano and 14 de Julio), which should be more suitable than Starbucks. The meeting will take place on Saturday, January 5th, 2013, at 3pm (Buenos Aires time) Discussion article for the meetup : Fourth Buenos Aires Less Wrong meetup
ff028453-e969-4bcf-a0bb-42720d65b3bb
trentmkelly/LessWrong-43k
LessWrong
Bayesian Cabaret I'd heard rumors that some leading Bayesians had achieved rank in the Bardic Conspiracy. But I wasn't aware that every two years, some of the world's top statisticians hold a Bayesian Cabaret, full of songs, dances, and skits about Bayesian probability theory. ...no, really. Really. I think my favorite has got to be this one. YouTube seems to be full of this stuff, including What A Bayesian World and We Didn't Start The Prior. Be sure to also check out some of the recordings and the Bayesian Songbook. Eliezer's finished his sequences, and it's a new era for Less Wrong. Let's celebrate...Bayesian style!
160b8466-4af7-4eaa-9c4e-d84585db4eb9
trentmkelly/LessWrong-43k
LessWrong
Normative uncertainty in Newcomb's problem Here is Wikipedia's description of Newcomb's problem: > The player of the game is presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). The player is permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000. > > Nozick also stipulates that if the Predictor predicts that the player will choose randomly, then box B will contain nothing. > > By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor's prediction, and knowledge of the Predictor's infallibility. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are. Most of this is a fairly general thought experiment for thinking about different decision theories, but one element stands out as particularly arbitrary: the ratio between the amount the Predictor may place in box B and the amount in box A. In the Newcomb formulation conveyed by Nozick, this ratio is 1000:1, but this is not necessary. Most decision theories that recommend one-boxing do so as long as the ratio is greater than 1. The 1000:1 ratio strengthens the int
4304273c-5f02-4106-8485-c8a051c079ef
trentmkelly/LessWrong-43k
LessWrong
What do beneficial TDT trades for humanity concretely look like? Who do you think you're trading with and why? What explicitly is being traded? What mechanism enables both parties to trust the other?

I am quite pessimistic about TDT-related trades. Not because I think they aren't a real thing, but because I think the kinds of "trade offers" we encounter will be extremely negative. But I also know of at least two well-established safety researchers who are extremely optimistic about the idea.

If I understand it correctly, Richard Ngo's "Ants and the Grasshopper" suggests that human civilization (the grasshopper), encountering an alien AI swarm (the ants), could trade us not fighting back against genocide for the possibility of human civilization being simulated in the distant future. This idea appeals to me, but also seems to run into the problem that there is an extremely grim alternative trade that is usually cheaper for the ants. Feel free to DM if you don't want to post publicly.
36edfe6c-9975-4133-847c-cde9a8ef3512
trentmkelly/LessWrong-43k
LessWrong
Meetup : Baltimore / UMBC Meetup - general discussion Discussion article for the meetup : Baltimore / UMBC Meetup - general discussion WHEN: 19 February 2017 08:00:00PM (-0500) WHERE: Performing Arts and Humanities Bldg 4th floor, 1000 Hilltop Cir, Baltimore, MD 21250 No parking restrictions on weekends, so park wherever you want. Discussion article for the meetup : Baltimore / UMBC Meetup - general discussion
033987a1-d3df-4c84-b7af-2b953e83c374
trentmkelly/LessWrong-43k
LessWrong
Meetup : Helsinki Meetup Discussion article for the meetup : Helsinki Meetup WHEN: 22 September 2013 02:00:00PM (+0300) WHERE: Helsinki Music Centre, 00100 Helsinki, Finland The theme of this meetup will be strategic thinking, with Kaj Sotala leading the discussion. We'll meet in the cafe at Helsinki Music Centre. Afterwards we'll probably move to Kaisla for socializing. Discussion article for the meetup : Helsinki Meetup
aa51848a-f7cd-4246-927f-45b8036d1104
trentmkelly/LessWrong-43k
LessWrong
It’s Probably Not Lithium This post has been recorded as part of the LessWrong Curated Podcast, and can be listened to on Spotify, Apple Podcasts, Libsyn, and more. ---------------------------------------- A Chemical Hunger (a), a series by the authors of the blog Slime Mold Time Mold (SMTM) that has been received positively on LessWrong, argues that the obesity epidemic is entirely caused (a) by environmental contaminants. The authors’ top suspect is lithium (a)[1], primarily because it is known to cause weight gain at the doses used to treat bipolar disorder. After doing some research, however, I found that it is not plausible that lithium plays a major role in the obesity epidemic, and that a lot of the claims the SMTM authors make about the topic are misleading, flat-out wrong, or based on extremely cherry-picked evidence. I have the impression that reading what they have to say about this often leaves the reader with a worse model of reality than they started with, and I’ll explain why I have that impression in this post. (Preamble) A brief summary of their hypotheses The SMTM authors have recently (a) summarized their hypotheses on how lithium exposure could explain the obesity epidemic. The first hypothesis is that trace exposure is responsible: > One possibility is that small amounts of lithium are enough to cause obesity, at least with daily exposure. And the second one is that people are intermittently exposed to therapeutic doses: > [E]ven if people aren’t getting that much lithium on average, if they sometimes get huge doses, that could be enough to drive their lipostat upward. I am going to argue that neither of those is plausible. I address the plausibility of the second hypothesis in the next section, and the plausibility of the first one in the rest of the post.  Lithium exposure in the general population is extremely low, even at the tails, in the majority of countries for which we have data A few days ago, the SMTM authors published a literature review (a) on the
d0cd06df-1996-49c7-a17c-b97c40b224f2
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post430

This is a rough write-up of ideas by Ryan Greenblatt. The core ideas are his and mistakes are mine. Thanks to him, Buck Shlegeris, and Aghyad Deeb for feedback on the draft of this post.

One way to use scheming AIs is to train them to imitate humans. When does imitation robustly avoid frequent and intentional failures?

- On single-token classification tasks, imitation works great and can allow you to get a lot of high-quality labor with a tiny rate of intentional failures;
- On tasks that involve generating many tokens, imitation potentially fails by default: schemers could notice when an output was generated by themselves, which enables them to consistently behave badly in deployment while doing best-effort imitation during training.

We describe two techniques that avoid imitation failures on generation tasks:

- A technique that works perfectly but is too expensive;
- A technique that we think works in practice.

[Edit: These techniques are already known in the behavioral cloning literature. The main novelty of this post is explicitly thinking of these ideas through the lens of defenses against scheming. Thanks to tjbai for pointing it out in the comments of this post.]

While pure imitation may not be sufficiently competitive, we think it could prove useful for tasks such as:

- (No-CoT) single-token classification;
- KL-regularization for RL: we may want to penalize models trained with RL for straying too far from a human imitation prior (and we would want this human prior to not be malign!) (see Appendix);
- Tasks for which it is fine to take the competitiveness hit from not using RL, but which weaker trusted models are not good enough at. [1]

Assumptions

In this post, we focus on scenarios where:

- AIs are smart (around human-level or above);
- AIs are scheming and know how we are training and deploying them;
- We want to use these AIs productively (e.g. to do safety research); [2]
- Training and deployment tasks are indistinguishable (e.g. because we randomly sampled training tasks from the deployment distribution);
- SGD works: we ignore gradient hacking and AI architectures where SGD is very sample inefficient;
- AIs can’t cause catastrophes in very few actions (e.g. because they are too weak to break safety measures that address these sorts of concerns);
- Even when AIs try their best, they can’t perfectly predict the probability distribution over the actions that humans [3] would take.

Easy-mode: imitating human judgment on classification tasks

Here is how you can use an AI to 100x the speed at which you can complete a billion single-token (IID) classification tasks (e.g. detecting software bugs, scoring project ideas, …):

1. Sample 1% of the tasks, and ask humans to label them (using their judgment);
2. Train the AI on that 1% of tasks you have solved yourself, using normal training (e.g. SGD with a cross-entropy loss);
3. Use the AI to solve the remaining 99% of tasks.

This will get you an AI which failed intentionally at most a few times in the whole training set, because intentional failures would have quickly been trained away. [4] Therefore, you got at most a few hundred intentional failures in the whole set of tasks. That is an intentional failure rate of at most around 1/10M, not bad! [5]

In other words, human imitation avoids intentional low-stakes failures: the model won’t often be worse than a best-effort imitation of a human. (But best-effort imitation of a human could be catastrophic if that isn’t sufficiently capable.)
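To make the easy-mode recipe above concrete, here is a minimal sketch of the pipeline. It is only a sketch under the stated assumptions, not anyone's production code: `human_label`, `finetune`, and `model.predict` are hypothetical stand-ins for human judgment and ordinary supervised fine-tuning.

```python
import random

# Minimal sketch of the classification-imitation recipe above.
# `human_label`, `finetune`, and `model.predict` are hypothetical
# stand-ins, not a real API.

def imitate_classification(all_tasks, base_model, human_label, finetune,
                           train_frac=0.01, seed=0):
    rng = random.Random(seed)
    tasks = list(all_tasks)
    rng.shuffle(tasks)  # IID split: training tasks match deployment
    n_train = int(train_frac * len(tasks))
    train_tasks, deploy_tasks = tasks[:n_train], tasks[n_train:]

    # Humans label 1% of tasks using their own judgment.
    labeled = [(task, human_label(task)) for task in train_tasks]

    # Normal supervised training (e.g. SGD with a cross-entropy loss);
    # intentional failures on this split get trained away quickly.
    model = finetune(base_model, labeled)

    # The imitator answers the remaining 99% of tasks.
    return {task: model.predict(task) for task in deploy_tasks}
```

The key design property is the random split: because training tasks were sampled IID from the deployment distribution, the model has no feature it can use to tell deployment tasks apart from training tasks.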
The basic problem with imitating humans on generation tasks

But most tasks we care about involve generating many tokens (e.g. coding a complex experiment, writing a detailed project proposal, …). Does imitation work for such generation tasks? Not quite. Here is a problem that could arise if you train an AI to imitate human answers on 1% of tasks and use it on the remaining 99% without additional supervision:

On the 1% of training tasks, the AI tries its best to imitate human answers when the start of the prompt was generated by a human.

On the 99% other tasks:

- The AI starts to generate an answer; at the start it can’t know if it is in the 1% of training tasks or the 99% other (since the 1% was sampled at random).
- But for some tasks, the AI eventually makes a few unintentional imitation mistakes during the generation that are easy to notice. (These mistakes are due to difficulties perfectly imitating humans.)
- Halfway through the generation, the AI could detect those imitation mistakes, and conclude that it is extremely unlikely that a human generated that text. This is possible because discrimination is often easier than generation. [6]
- The AI recognizes there was an imitation mistake in the start of the answer, so it now knows the answer wasn’t generated by a human, so it knows it can’t have been a task in the training set, so it can do whatever it wants without the risk of getting trained away! It finishes the generation with the ending that maximizes the probability of a catastrophe.

This might get you arbitrarily bad intentional failures in 99% of cases, which is the worst possible outcome.

You can notice many such failures just by asking humans to audit a small fraction of the 99% of tasks that are AI generated. If the AI does anything blatantly bad on a significant fraction of inputs, then you can just catch the AI red-handed, which may allow you to stop, analyze potential misalignment, etc. But auditing only works if humans can recognize catastrophic low-stakes failures, which is not always the case. We lost the main edge of human imitation, which is that we don’t need to recognize catastrophic low-stakes failures, we just need to avoid making them ourselves.

Solving the problem in theory

Instead of asking humans to generate full answers, you can get back to easy-mode by training AIs to generate only one token at a time. For example, here is an algorithm you could use to solve 1 billion tasks which you can answer in 10 words or fewer:

1. Sample 0.1% of tasks, ask humans to solve them, take the first word of their answers, and fine-tune the AI into an AI M1 that imitates the first word of human answers;
2. Sample the next 0.1% of tasks, sample the first word using M1, ask humans to solve them under the constraint that their answer should start with the word chosen by M1, and fine-tune a second AI M2 to predict the probability distribution over the second word in the answer;
…
10. Sample the next 0.1% of tasks, sample the first 9 words using M1, then M2, …, then M9, ask humans to solve them under the constraint that their answer should start with the first 9 words chosen by M1, …, M9, and fine-tune a last AI M10 to predict the probability distribution over the last word in the answer;
11. Solve the remaining 99% of tasks with M10(…M2(M1(task))…).

All AIs were trained in distribution (e.g.
M10 was trained to imitate humans on a distribution of prompts + answer starts that contained mistakes from M1, …, M9), so you recover the same safety properties as with classification tasks: you get an intentional failure rate of at most around 1/1M per model (since you are running each AI on roughly 1000x more tasks than training tasks), which means you get an intentional failure rate of at most around 1/100k for the whole system, which is still pretty good. [Edit: this algorithm is very close to what is known in the literature as forward training.]

Solving the problem in practice

The approach above isn’t really viable for very long answers: it is too slow, requires training as many models as there are tokens in the longest answer, and would get you an intentional failure rate which is too high. Here is an alternative training method which we think is likely to work for realistic tasks:

1. Sample 0.5% of the tasks, and ask humans to solve them;
2. Train the AI to imitate the human (full) answers;
3. Repeat until you have exhausted the full 1% quota of training tasks:
   a. Sample the AI's (full) answer on a new task (never trained on);
   b. Cut this answer into k-word chunks, with k small enough that an AI can’t be confident a text is AI-generated if it only saw k words generated by itself;
   c. For each prefix of j≥0 chunks, ask the human to generate one more chunk (e.g. by solving the task and taking the first k words);
   d. Train the AI on these human-generated chunks. (Use this AI for step (a) in the next round.)
4. Use the AI to solve the remaining 99% of tasks.

[Edit: this algorithm is very close to what is known in the literature as DAgger.] (A minimal code sketch of this loop appears at the end of this post.)

This should work basically for the same reason as the algorithm described in the section above: when the AI generates an answer, it may notice that the start was generated by an AI, but it still won’t be able to tell if it is in training or not, since training often contains sequences that start with an AI-generated answer but continue with a human-generated answer the AI should imitate. We also conjecture that the intentional failure rate when using 1% of 1B training tasks for training should be at most around 1/10M (if you have enough samples that the process converges and the AI can’t distinguish itself from the AI from the previous iteration). [7]

How do you determine k? Train AIs on the classification task of distinguishing AI answers from human ones, and pick k small enough and n large enough that they can never distinguish very confidently. (It should be easy by default to get competitive elicitation on classification tasks like this, because it’s a classification task that we have perfect ground truth for.) Humans can try to make their generations more AI-like (e.g. make the same kind of mistakes an AI would, avoid using kinds of reasoning AIs can’t imitate well, …), but only when they are confident these differences from normal human behavior aren’t malign. In practice, we think using around 200-word chunks should be enough for AIs that look like current LLMs (but larger) sampled at temperature=1. We think that asking humans to generate the many 200-word chunks for a given task is not much more expensive (e.g. <10x more expensive) than getting a full human-generated answer.

Conclusion

Training AIs to imitate human answers can be a powerful method to get a large amount of human-level-ish labor [8] while avoiding frequent malign actions.

Imitating single-token answers works by default for ensuring some typical behavior (as long as there is no distribution shift).
Imitating multi-token answers doesn’t work by default because AIs might recognize that they are generating the text, which would put them out of distribution and allow them to take frequent malign actions.

You can solve this multi-token imitation problem by training the AIs to imitate human behavior on AI-generated prefixes. This still doesn’t eliminate rare failures.

Appendix: KL divergence prevents some low-stakes failures

Some low-stakes failures stem from many bits of optimization and would have been extremely unlikely to have been generated by a human. For example, it could look like some complex actions to fool humans combined with advanced jailbreaks of our best trusted AI monitor. To prevent such failures, we might want to measure how unlikely a string is in the human prior. In fact, this is already one method AI developers sometimes use to prevent LLMs trained with RLHF from learning adversarial attacks against the reward model: they impose a KL-divergence penalty that penalizes outputs that are too unlikely to be generated by the reference model, an LLM of similar strength trained to imitate human demonstrations. (This regularization is often done implicitly using early stopping.)

But this might catastrophically fail if the reference model is a scheming AI: it could discriminate the current text from the imitation training data (and notice that it is instead AI generated), and then put high probability on catastrophically bad answers. But this can be avoided by doing human imitation right (using the algorithm described above adapted to work with non-human generations, as described in another Appendix below). There are other alternatives, like asking a human if a string looks likely or using a KL penalty from a weaker trusted model, but humans might be fooled, and weaker trusted models might perform poorly (which means KL regularization is more costly in terms of performance).

Appendix: Using human imitation in cases where text is obviously non-human

You might want to have an AI generate human-like probability distributions in cases where the start of the answer is clearly not human, for example when:

- You want to sample an LLM at a temperature T<1 (the prefix is not even trying to be sampled from the human distribution): the probability distribution is not used as-is to generate tokens but is made sharper;
- You want to use the LLM as a KL-regularizer for an RLed model: the probability distribution is not used to generate tokens, since it’s the RLed model which generates trajectories.

The problem is similar to the problem of AIs noticing their own mistakes described earlier, except that here the AI will be able to tell much faster that the prefix is not an imitation of a (T=1) human. In such situations, the “in theory” algorithm still works, and the “in practice” algorithm too, except that k may be much smaller, which may make data generation (by humans) more expensive. Note that the price of training is independent of k if prefix caching is used.

Appendix: RL vs human imitation

RL might be needed to get high performance. Human imitation is limited: it can’t select for the explicit reasoning strategies that work the best, and it can’t discover new reasoning strategies. Human imitation could in principle get you up to human level, but in practice AI labs may need RL to get anywhere close. For example, GPT-4 is much worse at agentic tasks before RL (see, for example, Appendix A of the first Metr report).
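One concrete point of contact between imitation and RL is the KL-regularization discussed in the appendix above. The following is a minimal sketch of such a penalty, under the assumption that we have per-token log-probabilities of the sampled trajectory under the RL policy and under the imitation reference model (both inputs are hypothetical):

```python
import torch

# Minimal sketch of a KL penalty toward a human-imitation reference
# model. Inputs are hypothetical: log-probs of the sampled tokens under
# each model, shape (seq_len,).

def kl_penalized_reward(reward: float,
                        policy_logprobs: torch.Tensor,
                        ref_logprobs: torch.Tensor,
                        beta: float = 0.1) -> float:
    # Monte Carlo estimate of KL(policy || reference) along the sampled
    # trajectory; a large value means "unlikely under the human prior".
    kl_estimate = (policy_logprobs - ref_logprobs).sum()
    return reward - beta * float(kl_estimate)
```

Note that if the reference model is itself a schemer, this penalty inherits the failure mode described above, which is one reason to train the reference with the prefix-robust imitation scheme.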
You can get some of the benefits of RL with human imitation, but you can’t get its full power. Instead of asking humans to generate answers, you might ask them to generate answers that AIs would have an easy time imitating, and then select the ones that AIs can actually imitate well in practice, and then fine-tune on those. You might also do what was done for the training of text-davinci-002: generate samples from a model trained with RL, check that they are all benign, and imitate that. But this becomes risky very fast: if there was some subtle malign behavior you did not catch in the training data (which was what you wanted to avoid in the first place), you will train the AI to generate subtly malign behavior in the 99% other tasks! So you probably can’t go very far with this: a few rounds of selecting the easiest-to-imitate answer should be fine (because there aren’t enough bits of optimization and exploration space to select for malign behavior), but you definitely can’t just imitate the behavior of a potentially scheming AI after a big RL training run.

You can make it cheaper to get high-quality human answers with iterative distillation: once you have trained an AI assistant with human imitation, you can use that to generate more answers to more questions, improve them with AI-assisted human labor, and train a more powerful assistant. Repeating this process is called amplification and benefits from similar safety guarantees as direct human imitation (except that the (tiny) rate of intentional failures goes up with each iteration round, and high-stakes failures can spread such that in the limit of negligible training data and infinite iteration rounds you might still get a non-zero fraction of intentional failures).

The performance issues of human imitation were overrated in the past, and it’s unclear how human imitation will scale. Before GPT-2, most people interested in AGI were looking at RL because they thought human imitation was too brittle. It turns out that the current smartest AIs get most of their intelligence from imitating human-written text scraped from the internet, and can be turned into decent AI assistants by training them to imitate human-written examples of what it is like to be a good AI assistant. But RL is powerful, and is starting to show signs of strength. So it is unclear how much we will be able to rely on human imitation for the first AIs that are both catastrophically dangerous and transformatively useful.

[1] It is unclear if such tasks exist — maybe you always want to use a weaker model.

[2] In many scenarios we might be uncertain whether they are scheming, but for the purpose of ensuring safety we assume the worst and treat them as if they were scheming.

[3] More precisely, the probability distribution over the actions of the humans that are generating the training data.

[4] Our password-locked models experiments seem to show that intentional failures can be mostly trained away when fine-tuning on only 1 to 4 demonstrations of correct behavior, on tasks where answers are around a hundred tokens long.

[5] The same approach can be adapted to work if you have a stream of tasks with slow distribution shifts: you train on the first 0.5% of the stream and then train on one task every 200.
[6] For example, if you train GPT-4 on a dataset of human-written functions that are exactly 100 lines long, it may often fail to write a function of exactly 100 lines because it is hard to write such functions directly, but it would be very easy for it to recognize that a function is not 100 lines long. We think there would be many such markers of failures in very long text generation, even if the distribution has not been artificially restricted.

[7] If we know that some tasks are high risk, we can always train more on them, which means things should be fine in practice even if such a rate would be unacceptable at the level of the full deployment dataset.

[8] There are competitiveness concerns, since AIs could reach human performance with RL before they are able to reach human performance with pure imitation.
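As an addendum, here is a minimal sketch of the "Solving the problem in practice" loop from earlier in the post (the DAgger-like variant). Every helper is a hypothetical stand-in: `human_answer(task)` returns a full human answer, `human_continue(task, prefix, k)` returns the next k-word chunk a human would write after the given (possibly AI-generated) prefix, and `finetune` is ordinary supervised training.

```python
import random

# Minimal sketch of the "in practice" imitation algorithm described in
# the post. All helpers are hypothetical stand-ins.

def chunk_words(text, k):
    words = text.split()
    return [" ".join(words[i:i + k]) for i in range(0, len(words), k)]

def train_imitator(tasks, model, human_answer, human_continue, finetune,
                   k=200, warmup_frac=0.005, budget_frac=0.01, seed=0):
    rng = random.Random(seed)
    tasks = list(tasks)
    rng.shuffle(tasks)
    warmup = int(warmup_frac * len(tasks))
    budget = int(budget_frac * len(tasks))

    # Round 0: imitate full human answers on 0.5% of tasks.
    model = finetune(model, [(t, "", human_answer(t)) for t in tasks[:warmup]])

    # Later rounds: humans continue AI-generated prefixes, so training
    # contains sequences that start AI-generated but continue human-like.
    for task in tasks[warmup:budget]:
        chunks = chunk_words(model.generate(task), k)
        data, prefix = [], ""
        for j in range(len(chunks) + 1):
            # For each prefix of j >= 0 chunks, a human writes one more chunk.
            data.append((task, prefix, human_continue(task, prefix, k)))
            if j < len(chunks):
                prefix += (" " if prefix else "") + chunks[j]
        model = finetune(model, data)
    return model
```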
d91543b9-32cd-43b3-81f8-b685e3104087
StampyAI/alignment-research-dataset/arxiv
Arxiv
Solving hard AI planning instances using curriculum-driven deep reinforcement learning. 1 Introduction --------------- Deterministic, fully observable planning is a key domain for artificial intelligence. In its full generality, AI planning encompasses general theorem proving, where proofs can be viewed as plans leading from a set of basic axioms to the theorems to be proved. Planning is well known to be a very challenging computational problem: finding proofs in a strong first-order mathematical theory which encodes basic arithmetic is undecidable [[8](#bib.bib30 "Über formal unentscheidbare sätze der principia mathematica und verwandter systeme i")] and plan-existence is PSPACE-complete for propositional STRIPS planning [[4](#bib.bib26 "The computational complexity of propositional strips planning")]. Domain-independent planners, such as BLACKBOX [[15](#bib.bib27 "BLACKBOX: a new approach to the application of theorem proving to problem solving")], IPP [[16](#bib.bib28 "Extending planning graphs to an adl subset")], and FF [[12](#bib.bib10 "FF: the fast-forward planning system")] among many others, were built for solving general planning tasks given by an initial state, goal state, and a set of plan operators [[24](#bib.bib29 "The 2014 international planning competition: progress and trends")]. Though these planners have greatly enlarged the set of feasible planning tasks, one major shortcoming of these planning systems is that they may do well on one problem domain but poorly on another, which has prevented a wider use of AI planning systems. This situation is in contrast to the development of SAT/SMT solvers, which also tackle a combinatorial search task, but have found wide applicability in, for example, hardware and software verification [[13](#bib.bib31 "The international sat solver competitions")]. An alternative approach to general AI planning is to develop domain-specialized solvers, e.g., Sokolution for solving Sokoban planning problems, as discussed below. The specialized solvers utilize handcrafted domain-specific knowledge to prune the search space. Clearly, an effective domain-independent approach is preferable. Our learning framework presented here provides a path towards such domain independence. In particular, we will use a machine learning framework to automatically uncover domain-specific problem structure during the solution process. Recent advances in the deep learning community inspired methods of augmenting search with deep neural networks using deep reinforcement learning (RL). In the game domain, AlphaGo [[22](#bib.bib2 "Mastering the game of go with deep neural networks and tree search")] as the first Go program to beat professional players in 2016 and its more general and newest version AlphaZero [[23](#bib.bib3 "Mastering chess and shogi by self-play with a general reinforcement learning algorithm")] achieved a higher Elo rating and dominated the state-of-the-art Chess program Stockfish. One key question about the success of deep RL in these combinatorial (logical) domains is whether the game setting is a required component for success. RL requires a reward signal. In a game setting, this signal comes from the ultimate win/loss result from playout. For a game, we can train the deep nets in a self-play approach. In such an approach, the initial deep net starts off playing at a very low level (essentially random play). 
But in self-play against an equally weak player (using a copy of the trained network), the system will see a mixture of wins and losses and thus gets a useful reward signal about the utility of states. The system can then slowly improve its level of play through repeated rounds of self-play. The core challenge in a single-agent setting, such as AI planning, where we want to solve unsolved problem instances is: how do we get any positive reward signal during training? This is because a positive feedback signal would require a valid plan to the goal but that is exactly what we are looking for. In fact, the problem instances we will solve here require very subtle chains of several dozens to even hundreds of steps. A random exploration will never “accidentally” encounter a successful chain. Our solution is to devise a series of training instances (a “curriculum”) that slowly builds up to the full, previously unsolved problem instance. We will see below how such an approach is general and surprisingly effective. Curriculum based training has earlier been proposed in [[2](#bib.bib33 "Curriculum learning")] as a strategy for partitioning training data for incremental training of hard concepts. A novel aspect of our setting is that at each level of our curriculum training, we use what was learned at the previous level to obtain new training data to reach the next level. Given the PSPACE-completeness of many interesting planning tasks, it is widely assumed (unless P = PSPACE) that developing a solver capable of solving effectively any arbitrary instance is infeasible. Moreover, due to the significant overhead of training deep neural networks, we do not aim to compete on running time with finely tuned specialized solvers on small problem instances. In this work, we therefore focus on planning instances that are right beyond the reach of current state-of-the-art specialized solvers. We will show for the first time how such instances can be tackled successfully within a deep RL framework. Specifically, we will show AI planning instances on which our deep learning strategy outperforms the best previous combinatorial search methods. We will also provide insights about what problem structure deep nets capture during the learning process. We selected Sokoban planning as our AI planning task because of its extreme difficulty for AI planners [[6](#bib.bib4 "The first learning track of the international planning competition")] [[17](#bib.bib25 "Structure and inference in classical planning")]. Moreover, Sokoban instances have a regular 2-D input shape that is well-suited for convolutional neural networks. Such 2-D structure can also be found in many other AI planning that involve scheduling and transportation style problems. However, Sokoban is much more challenging in computational terms. Sokoban is a single-player game, created in 1981, in which, given a set of boxes and equal number of goal locations, a player needs to push all boxes to goal squares without crossing walls and boxes. Figure [1](#S3.F1 "Figure 1 ‣ 3.1 Model ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning") shows a typical instance. The player can only move horizontally or vertically onto empty squares. Despite its apparent conceptual simplicity, it quickly became clear that one could create very hard instances with highly intricate and long solutions (if solvable at all). 
Analyzing the computational complexity of Sokoban is non-trivial but the question was finally resolved by Culberson in 1997, who proved the problem to be PSPACE-complete [[5](#bib.bib5 "Sokoban is pspace-complete")] [[11](#bib.bib24 "PSPACE-completeness of sliding-block puzzles and other problems through the nondeterministic constraint logic model of computation")]. We will show below that the harder Sokoban instances lie far beyond general AI planners and are also quickly beyond the reach of specialized Sokoban solvers. All modern state-of-the-art solvers are based on a combinatorial search framework augmented with intricate handcrafted pruning rules and dead-end detection techniques. Our framework learns and solves a single hard Sokoban instance at a time. This is an important choice in our setting. We want the deep net to uncover the underlying structure of the combinatorial space that is directly relevant to the hard — previously unsolved — instance under consideration. This approach mimics conflict-driven clause learning (CDCL) [[18](#bib.bib6 "GRASP: a search algorithm for propositional satisfiability")] for solving the Boolean satisfiability problem (SAT). In SAT solving, the clauses are learned during the processing of a single instance. In this setting, the learned clauses are optimally relevant to the problem instance at hand. Another potential advantage of our framework is that all the parameters of the deep neural network are focused on the layout of the given input instance and its corresponding search space. Though some general knowledge about Sokoban, e.g., that pushing a box to a corner leads to a dead-end state, can be learned from one instance and generalized to others, we show that our training setup can also discover this kind of knowledge efficiently and generalize well across its search space. In addition, the deep learning framework can now uncover very specialized problem structure that helps tailor the search for the solution to the specific problem instance at hand. An example of such structure is a certain placement of a subset of boxes from which the goal state cannot be reached. The search mechanism can now eliminate any exploration action sequences that lead to such a placement. This is analogous to the pruning provided by learned clauses in SAT solvers. The learned clauses are specific to the SAT instance under consideration. As we discussed above, we need to devise a way to obtain a proper training signal for solving AI planning problems. Since the input instance might be extremely hard and therefore cannot directly provide any positive reward signal, we incorporate the idea of curriculum learning and construct simpler subcases derived from the original challenge problem. In our Sokoban domain, a natural choice is to randomly select smaller subsets of initial boxes and goal squares while leaving all walls unchanged. In particular, our learning procedure starts from exploring 2-box subcases and gradually increases the number of boxes after the success rate of finding a solution increases to a certain threshold. We show that knowledge learned from subcases with smaller numbers of boxes can generalize successfully to subcases with larger numbers of boxes. Solving the Sokoban planning task is a combinatorial search problem and we will utilize AlphaZero-style Monte Carlo tree search in reinforcement learning for exploring the search space more efficiently.
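To make the curriculum-driven strategy above concrete, here is a minimal sketch of subcase generation: the walls of the full instance stay fixed, and only k of its boxes and k of its goal squares are kept, with k raised once the solve rate clears a threshold. The `Instance` container and the `solve_rate` callback (one round of MCTS exploration plus network training) are hypothetical stand-ins for the actual pipeline.

```python
import random

# Minimal sketch of the curriculum described above. `instance.boxes` and
# `instance.goals` are hypothetical lists of squares; `solve_rate` stands
# in for one exploration + training round and returns the fraction of
# boards solved.

def make_subcase(instance, k, rng):
    # Walls stay unchanged; keep only k boxes and k goal squares.
    return instance.with_pieces(boxes=rng.sample(instance.boxes, k),
                                goals=rng.sample(instance.goals, k))

def run_curriculum(instance, solve_rate, threshold=0.9, boards_per_iter=500):
    rng = random.Random(0)
    k = 2  # start from 2-box subcases
    while k <= len(instance.boxes):
        boards = [make_subcase(instance, k, rng)
                  for _ in range(boards_per_iter)]
        # Move to harder subcases only once the success rate is high enough.
        if solve_rate(boards) >= threshold:
            k += 1
```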
The only domain knowledge we use during learning is computing valid pushes from a state and building the state transition table, and the input of the neural network is the current raw 2-D board state. Intricate handcrafted techniques in modern solvers like dead-end detection are not used. Our experiments reveal that our curriculum-driven deep reinforcement learning framework can surpass traditional specialized solvers for a large set of instances from benchmark datasets such as XSokoban and Sasquatch. The deep network helps the Monte Carlo tree search explore the search space more effectively and offers significant generalization to unseen states. In addition, the growth of running time when the complexity of the instances in the curriculum increases is near polynomial instead of exponential. We will also provide a number of other insights into the learning process. Of particular interest is the observation that when training the deep net on harder tasks in the curriculum, its performance on easier instances degrades. This form of “catastrophic forgetting” makes the stronger networks less robust, even when better at solving harder instances. It would be an interesting research direction to devise a curriculum-driven approach that does not show degradation on easier tasks while still reaching maximal effectiveness on the original problem.

2 Related Work
---------------

Search in combinatorial domains has been studied extensively in AI, in areas such as planning, decision making, and reasoning. For NP-complete tasks, successful SAT solvers such as WalkSAT [[21](#bib.bib8 "A new method for solving hard satisfiability problems.")] and the CDCL framework [[18](#bib.bib6 "GRASP: a search algorithm for propositional satisfiability")] have been built to efficiently uncover structure of the input problem and demonstrate near polynomial scaling on many industrial SAT domains. The key insight of their success is the ability of the algorithm to learn problem invariants and reshape the search space by avoiding entering subtrees which do not contain a solution. Most planning tasks are harder than SAT and usually are at least PSPACE-complete. Graphplan [[3](#bib.bib9 "Fast planning through planning graph analysis")] and FF are general planners accepting formal languages such as PDDL [[7](#bib.bib11 "PDDL2. 1: an extension to pddl for expressing temporal planning domains")]. However, as reported in [[26](#bib.bib7 "A comparison of parallelism in current ai planning systems")], a major shortcoming of these general planners is that they may do well on one problem domain but poorly on another. The enhancement of planning with learning [[6](#bib.bib4 "The first learning track of the international planning competition")] has been investigated extensively in the past. Directed by the current goal, [[1](#bib.bib12 "Goal-based action priors")] prune away irrelevant actions. In each state, [[20](#bib.bib13 "What good are actions? accelerating learning using learned action priors")] exploit the usefulness of each action by learning action priors. For Sokoban-specialized solving, modern solvers utilize intricate domain-dependent techniques such as subtle dead-end detection, duplicate positions pruning, lower bound calculation, and no influence move detection [[14](#bib.bib15 "Sokoban: improving the search with relevance cuts")].
While all of these techniques offer efficiency improvements over general planning for Sokoban, good state representations, tight heuristic functions, and dead-end detection methods are handcrafted, which requires careful inspection of the domain structure and heavy utilization of domain knowledge.

In recent years, deep neural networks have achieved promising results in many domains. The most exciting result in the combinatorial domain is AlphaZero, which utilizes deep reinforcement learning to automatically discover the domain structure of two-player games like Chess and Go. Key to its success is the self-play learning strategy, which starts with two weak players, collects useful learning signals through self-play, and gradually improves the ability of the players. Previously, it was not clear how to develop such a “curriculum-driven” strategy in the planning domain: unlike in games, where learning signals (wins/losses) are available for any pair of players of roughly equal strength, including very weak and random players, in planning the agent initially fails to reach the goal state on every attempt and therefore cannot bootstrap its learning process. Deep neural networks have also been used to help tackle Sokoban problems: [[25](#bib.bib20 "Imagination-augmented agents for deep reinforcement learning")] augment deep reinforcement learning with an imagination component, and [[9](#bib.bib21 "Learning generalized reactive policies using deep neural networks")] use imitation learning to learn from successful Sokoban plays and generalize reactive policies to unseen instances. However, their performance is nowhere close to that of state-of-the-art specialized Sokoban solvers. Our approach offers two major advantages over prior approaches: (1) it solves hard benchmark instances that are out of reach of specialized Sokoban solvers; (2) no domain-specific knowledge is needed during learning, and no extra data, e.g., manually provided solutions, are required. Our idea of training on similar but easier subcases derived from the original instance can be adapted to other planning domains.

3 Formal Framework
-------------------

### 3.1 Model

![The instance XSokoban_29](https://media.arxiv-vanity.com/render-output/7125600/figures/origin.jpeg) ![A 3-box subcase of XSokoban_29](https://media.arxiv-vanity.com/render-output/7125600/figures/subcase.jpeg)

Figure 1: The instance XSokoban\_29 (left panel) and one of its 3-box subcases (right panel). The blue circle is the location of the player (or “pusher”), red circles are boxes, and cells with dark background are goal squares. Light colored squares form walls. The player has to push the boxes onto the goal squares.

Given a Sokoban instance I, the preprocessing phase computes the set of all possible pushes A.
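Since computing valid pushes is essentially the only domain knowledge the method needs, it is worth seeing how little is required. A plausible sketch of that computation under the grid representation assumed earlier (our own rendering, not the paper's code): a push is legal when the player can walk to the cell behind the box and the cell in front of the box is free.

```python
from collections import deque

DIRS = ((-1, 0), (1, 0), (0, -1), (0, 1))  # up, down, left, right

def player_reachable(player, boxes, walls):
    """Cells the player can reach without pushing any box (plain BFS)."""
    blocked = walls | boxes
    seen = {player}
    frontier = deque([player])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in DIRS:
            nxt = (r + dr, c + dc)
            if nxt not in blocked and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def valid_pushes(player, boxes, walls):
    """All (box, direction) pairs that are legal pushes in this state."""
    reach = player_reachable(player, boxes, walls)
    occupied = walls | boxes
    pushes = []
    for (r, c) in boxes:
        for dr, dc in DIRS:
            behind = (r - dr, c - dc)   # where the player must stand
            target = (r + dr, c + dc)   # where the box would end up
            if behind in reach and target not in occupied:
                pushes.append(((r, c), (dr, dc)))
    return pushes
```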
A deep neural network (p,v)=fθ(s) with parameters θ takes the board state s as input and outputs a vector of action probabilities p, with components pa=Pr(a|s) for each push action a∈A, and a scalar value v estimating the number of steps remaining to the goal from state s. The left panel of Figure [1](#S3.F1 "Figure 1 ‣ 3.1 Model ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning") shows the original instance XSokoban\_29 from the benchmark dataset XSokoban. The input to the network is a 6×H×W image stack consisting of 6 feature planes, where H and W are the height and width of the corresponding Sokoban instance. The feature planes represent walls, empty goal squares, boxes on empty squares, boxes on goal squares, player-reachable cells, and player-reachable cells on goal squares, respectively.

The effort of solving a Sokoban instance can be divided into two parts: 1. The player moves to the correct position adjacent to a box for pushing. 2. The player pushes the box. In our experiments, we use the set of valid pushes instead of valid moves as the action set, since the number of pushes in a solution is significantly smaller than the number of moves. (A move shifts the player, i.e., the pusher, by one square.) In other words, the plan length is generally far shorter when measured in pushes rather than moves. Modeling the set of valid pushes at each state requires keeping track of the player-reachable cells adjacent to boxes. Illegal pushes are masked out by setting their probabilities to zero and renormalizing the probabilities of the remaining moves.

For each instance I, we set a maximum number of allowable pushes, or “steps”, Imax during the learning phase. Imax is the maximum number of steps of a single plan that the algorithm is allowed to explore during learning; the model is forced to stop after Imax pushes. Setting such a threshold helps avoid infinite, meaningless loops during exploration. The remaining-step estimator v is also normalized to the interval [0,1] to fit better into the neural network framework. Notice that if Imax is set smaller than the length of the shortest solution plan, the model will never find any solution. In our experiments, we start with Imax=500 and double it whenever learning fails after a long run.

The learning framework consists of multiple iterations, and each iteration contains three parts: 1. Initial board generation phase: we randomly generate 500 initial boards according to our curriculum-driven strategy described in subsection [3.2](#S3.SS2 "3.2 Curriculum-driven Strategy ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning"). 2. Exploration phase: the model searches for solutions (plans) for these boards with Monte Carlo tree search (MCTS) driven by the policy/value network trained so far; for more details see subsection [3.3](#S3.SS3 "3.3 Monte Carlo Tree Search ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning"). 3. Training phase: we train the neural network with learning signals collected from the exploration phase, as described in subsection [3.4](#S3.SS4 "3.4 Training ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning").
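The masking step described in the model above amounts to a few lines of array manipulation. A minimal sketch, assuming `p` is the network's probability vector over the full push set A and `legal` is a boolean mask of the pushes valid in the current state (names are illustrative):

```python
import numpy as np

def mask_and_renormalize(p: np.ndarray, legal: np.ndarray) -> np.ndarray:
    """Zero out illegal pushes and renormalize over the legal ones.
    Assumes at least one legal push exists; states with none are
    treated as dead-ends by the search, not by this function."""
    masked = np.where(legal, p, 0.0)
    total = masked.sum()
    if total > 0.0:
        return masked / total
    # Degenerate case: the net put all its mass on illegal pushes.
    return legal / legal.sum()
```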
### 3.2 Curriculum-driven Strategy

For hard Sokoban instances, the deep RL setup may fail to find any solution and thus receives no useful training signal. Our curriculum strategy is based on two insights: (1) construct simpler subcases that are more likely to be solved by the currently trained model; (2) the constructed subcases should share structural information with the original instance, so that knowledge generalizes from the series of subcases to the original problem.

Learning starts by choosing a small subset of the initial boxes and goal squares to form a subcase for exploration and training, while leaving the wall locations unchanged. Figure [1](#S3.F1 "Figure 1 ‣ 3.1 Model ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning") (right) shows one such example, with three boxes and goal squares randomly selected from the initial ones; the resulting subcase requires a much simpler plan. Specifically, assuming the input Sokoban instance has n boxes and goal squares, in each iteration we randomly select m≤n boxes and goal squares, and we gradually increase m after a certain level of performance has been reached at each level. Compared with the original problem, solutions of the subcases are shorter and, most importantly, easier to find with MCTS and the deep net trained so far. By solving a collection of m-box subcases for each value of m, we effectively train a distance function and action model that can handle m-box subproblems for a range of initial and goal placements. This level of generality is important because we do not know in advance on which goal square any particular box from the initial state will end up. Moreover, because we start with 2-box subcases, MCTS with a randomly initialized deep net can still find a solution path, and thus a positive reward signal. By slowly increasing the subcase size (and difficulty), deep RL continues to obtain a positive reward signal and incrementally improves the trained net to handle increasingly complex scenarios, ultimately leading to a solution of the original instance when m=n. In the experiment section, we show that it is necessary for the model to reach a certain accuracy rate on m-box subcases before jumping to (m+1)-box subcases. If the model jumps to (m+1)-box scenarios before it reaches a high success rate on m-box scenarios, one potential danger is that the performance of learning abruptly degrades and the model might no longer be able to find any solution for (m+1)-box subcases.

![A small Sokoban instance](https://media.arxiv-vanity.com/render-output/7125600/figures/small_sokoban.png) ![An unsolvable subcase of the same instance](https://media.arxiv-vanity.com/render-output/7125600/figures/small_subcase.png)

Figure 2: An example showing that subcases are not necessarily solvable even though the original instance has a solution.

To decide when to increase m, one possible measure would be the solution rate reached at the current level. However, somewhat counterintuitively, even if the original problem instance is solvable, certain subcases may have no solution. This is due to a hidden complexity of the Sokoban domain: we know that the boxes need to reach the goal squares, but we do not know exactly which box should go to which goal square.
So, even though our subproblems use a strict subset of the boxes and goal squares, we may accidentally generate an unsolvable subproblem. Figure [2](#S3.F2 "Figure 2 ‣ 3.2 Curriculum-driven Strategy ‣ 3 Formal Framework ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning") gives an example. In this case, the probability of generating a solvable 2-box subcase is only 1/2, since we need to guarantee that the number of boxes and the number of goal squares are the same in each room. In general, it is difficult to compute the probability that a random subcase is solvable, so the success rate at a certain level is not a robust criterion. We use an alternative way to decide when to increment m: we increment m when the success rate of finding a solution has not improved over a certain number of iterations (5 iterations in our experiments). Our experiments show that this strategy works well in practice.

### 3.3 Monte Carlo Tree Search

We search for solutions (plans) of the m-box Sokoban subinstances (m≤n) using an AlphaZero-style Monte Carlo tree search (MCTS) guided by the deep net trained so far. As we will see, MCTS works well, but other search techniques may also be worth exploring in future work. In MCTS, at each state s, we compute (p,v)=fθ(s) and create a root node R containing the state s. Multiple Monte Carlo rounds are performed from R to determine the best move in s. Each round consists of three components:

* Selection: start from the root node R, which contains the state s, and select successive child nodes that maximize a utility function until a leaf node L, the goal, or a dead-end is reached. A leaf node is any node that has never been evaluated by the neural network before. If the goal or a dead-end is reached, we jump to the backpropagation phase; otherwise, we proceed to the expansion phase.
* Expansion: compute the set of all valid pushes A from the state sL of the node L. Unlike traditional MCTS, which follows expansion with a roll-out simulating multiple random plays, we evaluate (p,v)=fθ(sL) with the neural network and use v as the estimated evaluation for the backpropagation phase.
* Backpropagation: use v of sL to update the information of the nodes on the path from R to L. Set v to 0 if sL is the goal state, or to 1 if sL is a dead-end. Assume the state-observation-action trajectory from R to L is s=s0 −a1→ s1 −a2→ ⋯ −al→ sl=sL, where l is the length of the trajectory. We update Q(si,ai+1) to

Qnew(si,ai+1) = (Q(si,ai+1)⋅N(si,ai+1) + min(v + (l−i)/Imax, 1)) / (N(si,ai+1) + 1),

where Q(s,a) is the mean action value averaged over previous rounds and N(s,a) is the visit count.

To select child nodes, we choose at=argmaxa Q(st−1,a)+U(st−1,a) using a variant of the PUCT algorithm, where

U(s,a) = cpuct ⋅ pa ⋅ √(1+∑b N(s,b)) / (1+N(s,a)),

and cpuct is a constant balancing exploration and exploitation. After 1600 rounds have been performed, we choose a move either greedily or proportionally with respect to the visit counts at the root state s. This procedure continues until a dead-end or the goal is reached, or the maximum number of allowable pushes Imax has been performed. Notice that we do not utilize any of the advanced dead-end detection algorithms used in modern solvers. Instead, we only detect dead-ends when no valid pushes are available from the state, e.g., when all boxes have been pushed into corners and are no longer movable.
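For concreteness, the selection and backup rules above can be rendered as follows. This is our own sketch: the `Node` class and its fields are assumptions about the tree bookkeeping, not the authors' implementation.

```python
import math

class Node:
    """Search-tree node: visit count N, mean value Q, network prior p_a."""
    def __init__(self, prior):
        self.N, self.Q, self.prior = 0, 0.0, prior
        self.children = {}  # action -> Node

def select_action(node, c_puct):
    """argmax_a Q(s,a) + U(s,a), with
    U(s,a) = c_puct * p_a * sqrt(1 + sum_b N(s,b)) / (1 + N(s,a))."""
    total = sum(ch.N for ch in node.children.values())
    return max(node.children,
               key=lambda a: node.children[a].Q
               + c_puct * node.children[a].prior
               * math.sqrt(1 + total) / (1 + node.children[a].N))

def backup(path, v, i_max):
    """Update Q and N along the root-to-leaf trajectory. `path` is a list
    of (node, action) pairs; v is the leaf value (0 at a goal, 1 at a
    dead-end, otherwise the network estimate)."""
    l = len(path)
    for i, (node, a) in enumerate(path):
        child = node.children[a]
        # Leaf estimate plus normalized steps from s_i to the leaf, capped at 1.
        target = min(v + (l - i) / i_max, 1.0)
        child.Q = (child.Q * child.N + target) / (child.N + 1)
        child.N += 1
```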
To construct learning signals for the training phase, we collect all states on the paths explored by the Monte Carlo tree search and use the probability distribution proportional to the visit counts as the improved target distribution π for training. For value prediction, if the leaf node is the goal state, we use the (normalized) distance to the leaf node as the new label u; otherwise, either a loop or a dead-end was reached, and we set u to 1 for all nodes on the path.

### 3.4 Training

We use 5 GPUs to train the network, and each iteration contains 1000 epochs with a total mini-batch size of 160. Unlike [[19](#bib.bib22 "Playing atari with deep reinforcement learning")] and [[23](#bib.bib3 "Mastering chess and shogi by self-play with a general reinforcement learning algorithm")], who maintain an extra memory pool of saved training episodes, we train directly on the data collected in the current iteration. Specifically, the network parameters θ are adjusted by gradient descent on a loss function that sums a mean-squared loss and a cross-entropy loss:

l = (u−v)² − π⊤log(p) + c⋅∥θ∥²,

where c is a constant controlling the amount of weight decay. After the training phase, the new network parameters are used to guide the Monte Carlo tree search in the next iteration. For this paper, we did not perform a detailed hyperparameter study to select the best network structure for our problem setting: we used a vanilla ResNet [[10](#bib.bib23 "Deep residual learning for image recognition")] with 8 residual blocks for all experiments. It is an indication of the promise of our general framework that we already obtain good results with a standard ResNet; the overall performance can likely be improved further with careful hyperparameter tuning.
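Read concretely, the loss above corresponds to a few lines of PyTorch-style code. This is a sketch under our own naming assumptions; in practice the L2 term is often delegated to the optimizer's weight-decay setting rather than added to the loss explicitly.

```python
import torch

def training_loss(p_logits, v_pred, pi_target, u_target, model, c=1e-4):
    """l = (u - v)^2 - pi . log(p) + c * ||theta||^2, averaged over a batch.
    p_logits: raw policy logits over the push set A; pi_target: MCTS
    visit-count distribution; u_target, v_pred: normalized value labels
    and predictions. The constant c is illustrative."""
    value_loss = torch.mean((u_target - v_pred) ** 2)
    log_p = torch.log_softmax(p_logits, dim=1)
    policy_loss = -torch.mean(torch.sum(pi_target * log_p, dim=1))
    l2 = sum(w.pow(2).sum() for w in model.parameters())
    return value_loss + policy_loss + c * l2
```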
4 Experiments
--------------

Here we report our experiments on XSokoban, the de facto standard test suite in the academic literature on Sokoban solver programming, as well as other large test suites (Sokoban datasets are available at <http://sokobano.de/wiki/index.php?title=Solver_Statistics>). We pick instances that are marked as “Solved by none of the four modern solvers”. The time limit used for previously reported benchmark statistics is usually 10 minutes; we extend the time limit to 24 hours where the option is available and retest all solvers on these hard instances. We compare our method, the state-of-the-art specialized Sokoban solver Sokolution, and the domain-independent general planner FF. Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning") shows the performance of each solver on the selected instances.

| Sokoban instance | Our method | Sokolution | FF |
| --- | --- | --- | --- |
| XSokoban\_29 | 9.1h | Failed | Failed |
| Sasquatch\_29 | Failed | Failed | Failed |
| Sasquatch\_30 | Failed | Failed | Failed |
| Sasquatch3\_18 | 14.9h | Failed | Failed |
| Sasquatch7\_48 | 23.4h | 1.0h | Failed |
| Grigr2001\_2 | 22.1h | Failed | Failed |

Table 1: Performance comparison. Sokoban instances are selected from standard datasets and are marked as “Solved by none”. The time limit for all solvers is extended to 24 hours where the option is available. All solvers run on the same CPU cores, while our method utilizes 5 additional GPUs. The conversion from Sokoban into STRIPS format is described in [[26](#bib.bib7 "A comparison of parallelism in current ai planning systems")].

### 4.1 Scaling Comparison

![Scaling comparison on XSokoban_29](https://media.arxiv-vanity.com/render-output/7125600/x1.png)

Figure 3: Scaling performance comparison between FF, Sokolution, and our method on instance XSokoban\_29. For m-box subcases, we randomly generated 19 subcases for FF and Sokolution and plot boxplots of their running times. The running time in the top three figures is on a logarithmic scale, while the bottom three are on a linear scale. For our method, we plot the total time needed for the algorithm to achieve a 95% success rate for each m.

Since our framework utilizes extra GPU resources, to gain more insight into the difference between the ways our framework and traditional search-based algorithms handle hard instances, we evaluate the scaling performance of each solver on subcases of gradually increasing difficulty. We use XSokoban\_29, which contains 16 boxes, for illustration, as shown in Figure [3](#S4.F3 "Figure 3 ‣ 4.1 Scaling Comparison ‣ 4 Experiments ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning"). The running times for FF and Sokolution clearly show exponential growth, and FF can no longer solve any m-box subcase for m≥8. Sokolution significantly outperforms our method on small-box subcases; we believe this is mainly due to the heavy overhead of training the neural networks. As the size of the subcases increases, our method shines in both running time and scaling performance. Note that our method spends almost no extra time jumping from the 15-box subcases to the original 16-box instance: the predictions of the network learned from 15-box subcases are highly accurate on the 16-box instance, and the model is already capable of solving the original problem.

### 4.2 Exploration Efficiency

We now show that the network can efficiently extract knowledge while exploring and generalize to unseen states. For the same XSokoban\_29 instance, we plot the state efficiency by comparing the number of states seen during the exploration phase with the total number of possible states in Figure [4](#S4.F4 "Figure 4 ‣ 4.2 Exploration Efficiency ‣ 4 Experiments ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning"). In the left figure, for each m, the total number of initial states is (16 choose m)², since m of the 16 boxes and m of the 16 goal squares are selected. The number of explored states remains at almost the same order of magnitude as m increases, demonstrating the capability of the neural network to efficiently extract structural information from the combinatorial search space and generalize its knowledge to unseen parts of the space. The right figure compares the total number of possible board states with those explored by the Monte Carlo tree search. Notice that for subcases with m≤3, the model needs to see almost all possible board states before jumping to the next stage. This implies that generalization has not yet kicked in for small-box subcases, and the model needs to explore every possible combination of board states to understand the underlying structure. As m increases and the combinatorial space grows, generalization starts to shine and helps the Monte Carlo tree search stay within the most promising regions of the search space.

![State statistics during learning](https://media.arxiv-vanity.com/render-output/7125600/x2.png)

Figure 4: The state statistics during learning. The left figure shows the total number of initial states and the number of initial states explored by the Monte Carlo tree search. The right figure shows the total number of all possible states, and all states and unique states explored by the Monte Carlo tree search. The y-axis of both figures is on a logarithmic scale.

### 4.3 Forgetting during Curriculum Learning

![Success rate of different models on different subcases](https://media.arxiv-vanity.com/render-output/7125600/x3.png)

Figure 5: Success rate of different models on different subcases. For each n∈[2,16], we extracted the model Nn when the algorithm first reached a 95% success rate on n-box subcases. Each curve represents a model Nn, marked by a circle whose x-coordinate is n. The x-axis represents the m-box subcases; for each model Nn, we randomly generated 500 m-box subcases and tested its success rate on them.

One surprising phenomenon in curriculum-driven learning is that the networks may start to forget previously learned structure as learning proceeds. As seen in Figure [5](#S4.F5 "Figure 5 ‣ 4.3 Forgetting during Curriculum Learning ‣ 4 Experiments ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning"), as the number of boxes increases, the success rate on small-box subcases gradually decreases. Specifically, the curves trained on higher numbers of boxes drop off to the left, i.e., the performance on cases with fewer boxes becomes worse. On the other hand, the ability to solve increasingly hard cases (more boxes) through curriculum-driven training shows that knowledge learned from m-box subcases is useful for finding solutions to m′-box subcases with m′>m. The curves show that the model learned from 13-box subcases is already capable of solving the original instance with 16 boxes. This implies that an ensemble of knowledge from small-box subcases can work together to provide enough guidance for finding a solution of the original, unsolved problem instance. Note that catastrophic forgetting has been observed before in the context of training deep neural networks: deep nets can gradually or abruptly forget previously learned knowledge upon learning new information. This is an important issue because humans typically do not show such catastrophic forgetting when increasing their proficiency on a task; a chess player reaching grandmaster level, for example, will not suddenly start losing to a beginner. An interesting research challenge is to develop training curricula that prevent catastrophic forgetting in deep RL.

### 4.4 Knowledge Extraction from the Network

![Accuracy of network predictions compared with ground truth](https://media.arxiv-vanity.com/render-output/7125600/x4.png)

Figure 6: The accuracy of network predictions compared with ground truth.
The left figure shows the difference between the value prediction, which indicates the remaining steps to the goal, and the ground truth. The right figure shows the confidence of the network in the ground-truth action. We test both N8 and N16, trained after the 8-box subcases and the original problem, respectively.

We now examine how accurate the value and probability predictions are compared with ground truth provided by optimal Sokoban solvers. We also want to test whether the network can generalize to entirely unseen board states or only learns well on states that are frequently visited by the Monte Carlo tree search. For this experiment, we test on m-box subcases of XSokoban\_29 with m=8. We randomly select 500 initial 8-box subcases and apply some random pushes to them to generate a set of starting states that are presumably seen frequently by the Monte Carlo tree search. We also generate 500 board states whose boxes are placed at locations randomly selected from all possible cells of the board; such states are presumably hardly ever explored. All test states are guaranteed to be solvable and dead-end free. As shown in Figure [6](#S4.F6 "Figure 6 ‣ 4.4 Knowledge Extraction from the Network ‣ 4 Experiments ‣ Solving Hard AI Planning Instances Using Curriculum-Driven Deep Reinforcement Learning"), the utility function captures the distance to the goal surprisingly well on 8-box subcases whose boxes are close to the initial 16-box setup. When we consider subcases with the 8 boxes placed randomly, the utility function degrades. So the learning does focus on states close to those that may occur as legal intermediate states, which are heavily explored and exploited by reinforcement learning. For the 16-box trained network, we see an analogous phenomenon, but overall less accurate in terms of both utility and policy compared with the 8-box scenarios. In fact, the policy on 8-box subcases with randomly placed boxes becomes worse, though still far better than a random policy. This means that the Monte Carlo tree search is no longer focused enough to find the goal state in 8-box scenarios, which explains the forgetting curves discussed earlier.

5 Conclusion
-------------

We presented a framework based on deep RL for solving hard combinatorial planning problems in the domain of Sokoban. A key challenge in the application of deep RL in a single-agent setting is the lack of a positive reinforcement signal, since our goal is to solve previously unsolved instances that are beyond existing combinatorial search methods. We showed how a curriculum-driven deep RL approach can successfully address this challenge. By devising a sequence of increasingly complex subproblems, each derived from the original instance, we can incrementally learn an approximate distance-to-goal function that guides MCTS to solving the original problem instance. We showed the effectiveness of our learning-based planning strategy by solving hard Sokoban instances that are out of reach of previous search-based solution techniques, including methods specialized for Sokoban. We could uncover plans with over two hundred actions, where almost any deviation from the plan would lead to an unrecoverable state. Since Sokoban is one of the hardest challenge domains for current AI planners, this work shows the potential of curriculum-based deep RL for solving hard AI planning tasks.
In future work, we hope to extend these techniques to boost theorem-proving methods that find intricate mathematical proofs consisting of long sequences of inference steps, to assist in mathematical discovery.

Acknowledgments
---------------

We thank the anonymous reviewers for their valuable feedback. This work was supported by the Center for Human-Compatible AI, an NSF Expeditions in Computing award (CCF-1522054), an AFOSR award (FA9550-17-1-0292), and an ARO DURIP award (W911NF-17-1-0187) for the compute cluster used in our experimental work.
12a355cc-f1b6-411d-bc74-0cc9a5abf8f9
trentmkelly/LessWrong-43k
LessWrong
What are the best Siderea posts? I think the Asshole Filters post is underrated and everyone should read and share it. I also like the post about economic class vs social class. Do the posts only get worse from there? What are the top 5 or so?
821ddef8-7d9b-4fc8-9a1a-1d2284261282
trentmkelly/LessWrong-43k
LessWrong
Fundamentally Fuzzy Concepts Can't Have Crisp Definitions: Cooperation and Alignment vs Math and Physics
Epistemic status: I describe an argument that seems plausible, but could also be entirely off. I.e., low confidence. Summary of the Argument Certain concepts (like "cooperation" or "human values") might be fundamentally fuzzy. This would have two implications: (1) We should not expect to find crisp mathematical definitions of these concepts. (2) If a crisp mathematical definition seems appropriate in one setting, we should not be surprised when it stops being appropriate in other settings. Full Version One of the questions discussed in connection to Cooperative AI is "what is cooperation (and collusion)". This is important because if we had clear mathematical definitions of these concepts, we could do things like: Incorporate them into loss functions when training AI. Use them to design environments and competitions to evaluate AI. Write regulations that promote cooperation or forbid collusion. However, cooperation (and many other concepts) might be fundamentally fuzzy, such that we should not expect any such crisp definition to exist. Why might this be the case? First, some evidence for this claim is the observation that we haven't yet managed to agree on such a definition.[1] Conjecture: Cooperation is Just a Word (i.e., not ontologically fundamental) As a second argument, consider the following story -- totally made-up, but hopefully plausible -- of how we came to use the word "cooperation": Over the course of history, people interacted with each other in many different situations. Over time, they developed various practices, norms, and tricks to make those interactions go better. Eventually, somebody pointed to some such practice -- in a specific situation -- and called it "cooperation". Then the word also started being applied to several analogous situations and practices. And ultimately, we now use the word "cooperation" for anything that belongs to a certain cluster (or clusters) of points in the "Behaviour x Situation space". (And all of this changes over ti
4ce2a04f-2bd6-4e8b-9c59-1c1d122fd228
trentmkelly/LessWrong-43k
LessWrong
Lagrangian duality for constraints on expectations
Summary: It's possible to set up a zero-sum game between two agents so that, in any Nash equilibrium, one agent picks a policy to optimize a particular objective subject to some constraints on expected features of the state resulting from the policy. This seems potentially useful for getting an approximate agent to maximize some objective subject to constraints. ---------------------------------------- Consider the following optimization problem: > Maximize f(π) subject to E[a(X)|do(π)]⪰b. where π is a policy, f maps policies to reals, X∈X is a random variable representing the state of the world, a∈X→Rn is a feature vector, and b∈Rn is a vector. This objective instructs the agent to maximize some objective f(π) while ensuring that random variables a1(X),...,an(X) have expectations at least b1,...,bn respectively. How might we design an agent that solves the constrained optimization problem, if all we have available are agents that optimize unconstrained objectives? Consider a game between two agents, an actor and a critic: 1. The critic picks Lagrange multipliers λ∈Rn,λ⪰0. 2. The actor picks a policy π, which results in world state X. 3. The actor gets utility f(π)+λ⋅(a(X)−b). The critic gets the negation of this utility. Let's assume the critic and actor pick their λ and π simultaneously, and attempt to find the Nash equilibrium. Since the critic has an infinite action space, there is not guaranteed to be a Nash equilibrium. To make the problem easier, let's assume that the set of allowed policies is convex, and f is concave (that is, for any two policies π1 and π2, and θ∈(0,1), then if π3 is the policy formed by first picking π1 with probability θ or π2 with probability 1−θ and then running that policy, f(π3)≥θf(π1)+(1−θ)f(π2)). We will prove: Theorem: in any Nash equilibrium, the actor's policy solves the original optimization problem. This is due to Lagrangian duality; the critic is picking Lagrange multipliers. It's possible that a setup like this
290050ee-6340-4c44-9e77-d4b687cdd476
trentmkelly/LessWrong-43k
LessWrong
Logical Uncertainty as Probability This post is a long answer to this comment by cousin_it: > Logical uncertainty is weird because it doesn't exactly obey the rules of probability. You can't have a consistent probability assignment that says axioms are 100% true but the millionth digit of pi has a 50% chance of being odd. I'd like to attempt to formally define logical uncertainty in terms of probability. Don't know if what results is in any way novel or useful, but. Let X be a finite set of true statements of some formal system F extending propositional calculus, like Peano Arithmetic. X is supposed to represent a set of logical/mathematical beliefs of some finite reasoning agent. Given any X, we can define its "Obvious Logical Closure" OLC(X), an infinite set of statements producible from X by applying the rules and axioms of propositional calculus. An important property of OLC(X) is that it is decidable: for any statement S it is possible to find out whether S is true (S∈OLC(X)), false ("~S"∈OLC(X)), or uncertain (neither). We can now define the "conditional" probability P(*|X) as a function from {the statements of F} to [0,1] satisfying the axioms: Axiom 1: Known true statements have probability 1:     P(S|X)=1  iff  S∈OLC(X) Axiom 2: The probability of a disjunction of mutually exclusive statements is equal to the sum of their probabilities:     "~(A∧B)"∈OLC(X)  implies  P("A∨B"|X) = P(A|X) + P(B|X) From these axioms we can get all the expected behavior of the probabilities:     P("~S"|X) = 1 - P(S|X)     P(S|X)=0  iff  "~S"∈OLC(X)     0 < P(S|X) < 1  iff  S∉OLC(X) and "~S"∉OLC(X)     "A=>B"∈OLC(X)  implies  P(A|X)≤P(B|X)     "A<=>B"∈OLC(X)  implies  P(A|X)=P(B|X)           etc. This is still insufficient to calculate an actual probability value for any uncertain statement. Additional principles are required. For example, the Consistency Desideratum of Jaynes: "equivalent states of knowledge must be represented by the same probability values". Definition: two statements A and B
a033f5c3-f6c5-4434-9e9f-83b161904cca
trentmkelly/LessWrong-43k
LessWrong
PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" Two and a half years ago, I wrote Extrapolating GPT-N performance, trying to predict how fast scaled-up models would improve on a few benchmarks. One year ago, I added PaLM to the graphs. Another spring has come and gone, and there are new models to add to the graphs: PaLM-2 and GPT-4. (Though I only know GPT-4's performance on a small handful of benchmarks.) Converting to Chinchilla scaling laws In previous iterations of the graph, the x-position represented the loss on GPT-3's validation set, and the x-axis was annotated with estimates of size+data that you'd need to achieve that loss according to the Kaplan scaling laws. (When adding PaLM to the graph, I estimated its loss using those same Kaplan scaling laws.) In these new iterations, the x-position instead represents an estimate of (reducible) loss according to the Chinchilla scaling laws. Even without adding any new data-points, this predicts faster progress, since the Chinchilla scaling laws describes how to get better performance for less compute. The appendix describes how I estimate Chinchilla reducible loss for GPT-3 and PaLM-1. Briefly: For the GPT-3 data points, I convert from loss reported in the GPT-3 paper, to the minimum of parameters and tokens you'd need to achieve that loss according to Kaplan scaling laws, and then plug those numbers of parameters and tokens into the Chinchilla loss function. For PaLM-1, I straightforwardly put its parameter- and token-count into the Chinchilla loss function. To start off, let's look at a graph with only GPT-3 and PaLM-1, with a Chinchilla x-axis. Here's a quick explainer of how to read the graphs (the original post contains more details). Each dot represents a particular model’s performance on a particular category of benchmarks (taken from papers about GPT-3 and PaLM). Color represents benchmark; y-position represents benchmark performance (normalized between random and my guess of maximum possible performance). The x-axis labels are all using the Chinc
a9bfd542-25ad-468a-948d-fa224f6223b2
StampyAI/alignment-research-dataset/special_docs
Other
Is the rise of killer machines closer than we think? British computer scientist and world-leading AI expert Stuart Russell says we urgently need to understand the potential of superintelligent machines – before they turn on us.

By Damian Whitworth, The Times, Saturday January 29 2022. [Photo: Stuart Russell, 59. Richard Ansett/BBC]

A couple of years ago Stuart Russell, a British computer scientist who is one of the world’s leading experts on artificial intelligence, was approached by a film director who wanted him to be a consultant on a movie. The director complained that there was too much doom and gloom about the future of superintelligent machines. He wanted Russell to explain how the human heroes in the film could save our species by outwitting AI. “Sorry,” Russell told the director. “They can’t.”

Russell is a professor of computer science at the University of California, Berkeley, and a fellow of Wadham College, Oxford, who has worked for the UN to create a new system for monitoring the nuclear test-ban treaty, advised Downing Street and the White House, and co-written the standard university textbook on artificial intelligence. Success in creating superintelligent AI, he has predicted, “would be the biggest event in human history… and perhaps the last event in human history”. AI could lead us into a golden age, where we can enjoy lives that are no longer burdened by drudgery. Or it could destroy us as a species. Even if we learn to live with superintelligent machines, they may take all our jobs or create mayhem on battlefields. Vladimir Putin has said whoever takes the lead in AI “will become the ruler of the world”, prompting the billionaire entrepreneur Elon Musk to predict that nations competing for AI superiority will be the most likely cause of a third world war.

When Russell gave the Reith lectures last year, the headlines were mostly about the havoc that lethal autonomous weapons systems could wreak. But Russell has a wider vision, which is by turns thrilling and more terrifying than coronaviruses and global warming. While the human brain has evolved over millions of years, the development of computers and robots to simulate the human mind’s ability to solve problems, make decisions and learn has taken a few decades. From the very beginning of AI, says Russell, machines have been defined as intelligent “to the extent that their actions can be expected to achieve their objectives”. We set them tasks and they get on with them.

He believes we should make a very significant tweak to that definition so that machines are seen as “beneficial” to the extent that their actions can be expected to achieve “our” objectives. If we don’t design them with our wellbeing specifically in mind, we could be creating an existential problem for ourselves. In the past decade AI has started to fulfil some of its promise. Machines can thrash us at chess. When Russell was taking a sabbatical in Paris, he used machine translation to complete his tax return. In a recent breakthrough that could transform medicine, AI can now predict the structure of most proteins.
Today Russell is on a visit to the UK and we are sitting outside a café in London, our conversation recorded by an app on my phone that has learnt to recognise my voice and provides a reasonable simultaneous transcription of our conversation (although its claim, for example, that Russell is talking about “kick-ass machines made of cheese” does underline that AI armageddon is still some way off). These AIs are limited to harnessing considerable computational power to complete well-defined tasks. Google’s search engine “remembers” everything, but can’t plan its way out of a paper bag, as Russell puts it. The goal of AI research is creating a general-purpose AI that can learn how to perform the whole range of human tasks from, say, teaching to running a country. Such a machine “could quickly learn to do anything that human beings can do”, says Russell. And given that computers can already add billion-digit numbers in a fraction of a second, “Almost certainly it would be able to do things that humans can’t do.”

The creation of a superintelligent AI, which Russell has likened to the arrival of a superior alien civilisation (but more likely), is an enormous challenge and a long way off. But many experts believe it could happen in the next few decades, and Russell is an evangelist for the need to prepare for such an eventuality.

He likes to talk about Alan Turing, the father of theoretical computer science and AI, who in 1951 gave a lecture in which he chillingly predicted the arrival of superintelligent machines. “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers,” said Turing. “At some stage therefore we should have to expect the machines to take control.”

The danger, Russell suggests, is that our relationship with machines becomes analogous to the relationship gorillas have with us today. We had a common ancestor but “once humans came along, and they’re this much more intelligent than gorillas and chimpanzees, then game over. I think that’s sort of how Turing saw it. Intelligence is power. Power is control. That will be the end of it.”

Russell doesn’t believe that is necessarily the end of it, if we go about things the right way. But he wants us to be clear about the threat. Science fiction has sometimes suggested that machines will supersede us when they develop human consciousness; that when they are aware of themselves and their surroundings and motivations, they will seek to take over the world. Russell believes this is a red herring. The threat will come less from machines deciding they hate us and want to kill us than from their advanced competency. A highly sophisticated machine with a fixed objective could stop at nothing to achieve that objective and fail to take into account other human priorities. He calls this the “King Midas problem” after the mythical figure who asked for everything he touched to be turned to gold, realising too late that this would include food, drink and his family.

Already we give machines objectives that are not perfectly aligned with our own. Social-media algorithms are designed to maximise click-through in order to keep people on the site and so make as much money as possible from advertising. They have unfortunate side effects.
Users with more extreme preferences appear to be more predictable, says Russell, so the algorithm works out what keeps them online, and the diet of content they are fed is contributing to growing extremism around the world. “When a person is interacting with a system for six or eight hours a day, the algorithm is making choices that affect your behaviour, nudging you hundreds of times a day. And that’s happening to billions of people.” He would love to see the internal data from big tech companies “to really understand what’s going on”, but adds, “In America, you’ve got 60 million people who are living in a fantasy world.”

Imagine a more sophisticated AI that is capable of going into a coffee shop to get you a latte. It will be unhelpful to café society if it tears the place apart because it is fixed on achieving the task whatever the cost. Here we are entering the territory of 2001: A Space Odyssey, in which Hal, the spaceship computer, kills four of the five astronauts on board because he deems them a threat to the mission. AI’s potential to help in medicine is already being realised, but Russell raises the spectre of a superintelligent AI system being charged with finding a cure for cancer. It could quickly digest all the literature and make hypotheses, but all that will be wildly counterproductive if it then concludes that the quickest way to find a cure is to induce tumours in all of us in order to carry out trials. We might recruit an AI to fight the acidification of the oceans, only to find that its solution is to use a quarter of the oxygen in the atmosphere to achieve this and we all asphyxiate.

Solving the King Midas problem also solves the gorilla problem, by ensuring that AI is not in conflict with humans and we don’t end up existing at the whim of the machines. So we need to create AI systems carefully. They must be built so they are altruistic towards humans and uncertain about what all our preferences are. Then the AI system would ask what our preferences are regarding oxygen before going ahead and deacidifying the oceans.

“We have to build machines a different way [so that] they are trying to achieve whatever our objective is but they know that there may be other things we care about. So if we say, ‘I’d like a cup of tea,’ that doesn’t mean you can mow down all the other people at Starbucks to get to the front of the line.” And the machine must be devised so it will always allow us to turn it off. Otherwise, its logical conclusion would be to deactivate its “off” switch in order to eliminate an obvious threat to completing the task.

Given the starkness of some of his misgivings about the future, I was expecting Russell to be an intense prophet of cyber-doom in real life, but he is reasonable, softly spoken with a mid-Atlantic accent, and often funny, displaying an understated wit that is familiar from some of his writings. He is in London for a holiday with his wife, Loy Sheflott, founder and CEO of Consumer Financial, a marketing firm for financial services companies. They have four children who range in age from 15 to 23. Russell, 59, was born in Portsmouth and moved around the country because of his father’s job running Crown Paints and Wallcoverings. They also lived in Toronto for a few years. His mother was a fashion designer and teacher. Russell boarded at St Paul’s School in southwest London where even in an academic hothouse environment he clearly stood out.
The PREVIOUS ARTICLE  NEXT ARTICLEMagazine school didn’t teach computer studies back then, so he went on Wednesday afternoons to a local technology college where he could study the subject for A-level. He left school at 16 having taken his A-levels early, spent a gap year at IBM and then, at 17, went to Oxford, where he was awarded a first in physics. He moved to the US to do a PhD in computer science at Stanford University and then joined the University of California at Berkeley, where he is professor of electrical engineer ing and computer sciences and director of the Centre for Human-Compatible Artificial Intelligence. With Peter Norvig, Google’s former research director, he wrote the standard university textbook on AI and in his most recent book, Human Compatible: AI and the Problem of Control, he outlined some of his concerns about the future of artificial intelligence. Even if machines don’t take over the planet and eradicate us and we find a way to stay in control, living with them may present enormous challenges. What happens when they can do all – or, at least, the vast majority – of the roles that fill our working days? While he says they are currently useless at interviewing, it seems a reasonable bet that there are future interviewers being born today who will be made redundant by AI, along with house painters, drivers and radiographers. ADVERTISEMENT PREVIOUS ARTICLE  NEXT ARTICLEMagazine For many millennia, Russell points out, most humans have been in “robot” jobs; if they are released from agricultural, industrial and clerical roles by real robots it could transform human existence. “If all goes well, it will herald a golden age for humanity. Our civilisation is the result of our intelligence, and having access to much greater intelligence could enable a much better civilisation,” he said in one of his Reith lectures. Robots could build bridges, improve crop yields, cook for 100 people, run elections, while we get on with… what? We would PREVIOUS ARTICLE  NEXT ARTICLEMagazine need to reconfigure our economy and find new purpose while ensuring we don’t become enfeebled by relying on machines. A lot of us, suggests Russell, will be engaged in interpersonal services, supplying our humanity to others, whether as therapists, tutors or companions. We would have all the time in the world to strive to perfect the art of living, through art, gardening or playing games. “The need will not be to eat or be able to aord a place to live, but the need for purpose,” says Russell. We are used to adapting to new jobs, but less so to having no job at all. Is there not a danger that we end up with millions of therapists and slightly crap artists? “I don’t feel that’s the route to fulfilment,” he says, smiling. The most immediate problem facing us comes in the form of lethal autonomous weapons. They are already with us. The threat is not that AI weapons are going to turn upon us because our objectives and theirs collide, but that they can be used by nefarious states or groups to target their enemies. Israel’s Harop has a 10ft wingspan and the ability to loiter and search for targets and, when it recognises them , make a kamikaze attack. The UN has reported that a smaller drone may have autonomously targeted militia fighters in Libya. 
Miniature drones could be mass-produced cheaply, says Russell, and you could pack a million of them into a shipping container and then track people through technology that recognises a face or “anything you want: yarmulkes or turbans or whatever”. He can envisage a mass attack by a swarm. “I think it could happen that we would get attacks with a million weapons.”

We’ve legislated internationally against biological and chemical weapons and to stop nuclear proliferation. The systems are not perfect, but do mean the world community can go after those who don’t comply and make it hard for them to get the ingredients to create these weapons. Russell is frustrated by the reluctance of governments, including the UK and US, to ban lethal autonomous weapons outright. Officials at the Obama White House listened very carefully when he was part of a delegation there. “Their response on weapons of mass destruction was, ‘But we would never make weapons like that.’ In that case, why won’t you ban them? And they didn’t have an answer.”

I joke that by now computers must all know who he is and are probably listening in on this conversation and swapping notes. “I’m just trying to prevent the machines from making a terrible mistake,” he says. A small part of me is paranoid that someone – or some artificial someone – might spy on me through the camera in my computer. Was it just a coincidence that I started getting all those grotesque adverts for ear-cleaning devices after using a cotton bud in what I thought was the privacy of my own home office? I can’t believe I’m telling Russell this, but I keep a sticky note over the lens when I’m not on a video call. Rather to my surprise, he says, “I think that’s a good idea.” People who know more about computer security than he does say the same, apparently.

I wonder what he thinks of Elon Musk’s hopes to build a brain-machine interface or “neural lace”, inspired by Iain M Banks’s Culture novels. “His solution to the existential risk is that we actually merge with the machines,” he says. “If we all have to have brain surgery just to survive, perhaps we made a mistake somewhere along the line.”

How worried is he that his children or any future grandchildren will face a dystopian future with AI? “It doesn’t feel like a visceral fear. It feels like climate change.” But in the worst-case scenario AI would be terminal for our species, whereas with climate change we could probably cling on in the last temperate corners of the world. So AI could be worse than global warming? “In the worst case, yes. We have to follow our reasoning where it leads us. And if the machines really are more intelligent than us and we’ve made a mistake and set them up to pursue objectives that end up having these disastrous side effects, we would have no more power than chess players have when they are playing against the best chess programmes.”

The great thing about the chess app on my phone is that I can take a move back when I make a mistake. “Oh, you play like that?” he says, raising an eyebrow. On the way over on the plane he was playing a rather more formidable chess programme. “It doesn’t let you take any moves back.”

AI: the next 10 years
By Monique Rivalland

Health
The race is on to transform healthcare with AI and the market is estimated to be worth £120 billion by 2028. So what can we expect?
Artificially intelligent equipment will detect and diagnose disease earlier and more accurately. New drug discovery will be sped up. An AI developed by Google Health can already identify signs of diabetic retinopathy from eye scans with 90 per cent accuracy. At hospitals and care homes basic nursing tasks could be carried out by AI assistants. The field of neuroprosthetics, which develops brain implants, robotic limbs and cyborg devices, will help us overcome cognitive and physical limitations. This month BioNTech, maker of the Pfizer Covid-19 vaccine, launched an “early warning system” with London-based AI firm InstaDeep to detect new variants of the coronavirus before they spread.

Pets
Japan is leading the way in AI pets. Sony’s Aibo, which costs £2,127, is a robotic puppy. Aibo will respond to commands as well as read human emotions and distinguish between family members. When tired, Aibo returns to his charging station. Towards the end of 2020, almost a year into the pandemic, local government in New York started offering AI-powered furry tabby cats from robotics company Joy For All to care homes and older people in social isolation. China’s Unitree wants to make its four-legged robots, currently £1,980, as affordable as phones. It won’t be long before AI companions need not resemble traditional pets for humans to warm to them. Spot The Dog is not exactly a pet but a robotic canine that is so agile it is used to explore remote environments too dangerous or extreme for humans. Made by Boston Dynamics and sold for £55,312, it could assist with mining, police searches and space exploration.

Weapons
Robots and drones could carry out perilous tasks such as bomb disposal, but the biggest change to warfare will come in the shape of artificially intelligent killing machines. In November 2020, Israel assassinated Iran’s top nuclear scientist using a high-tech, computer-powered sharpshooter with multiple camera eyes, capable of firing 600 rounds a minute.

Transport
There are more than ten unicorn start-ups – that’s companies valued at $1 billion – vying for leadership in the autonomous vehicle industry. They’re in China, America, Britain and Canada and include personal transport as well as trucks and haulage. This month the MK Dons (Milton Keynes) football team have been trialling driverless cars called Fetch to take them to and from training. Self-driving cars are supposed to be safer and more efficient than human drivers and are expected on British roads later this year. The government has announced that cars fitted with automatic lane-keeping systems will be permitted to drive at up to 37mph in a single lane without the driver interacting with it.

Education
The main benefit here is that AI will better tailor education to students’ needs. Virtual tutors will assist human teachers in the classroom, offering support to students by giving instant answers to commonly asked questions. Facial-recognition tech could analyse the emotions of children to determine who’s struggling or bored and better personalise their experience.

Communication
Microsoft and Skype already have a voice translator that can translate between 11 languages, including Chinese, English, French, Japanese, Russian and Spanish. This is likely to advance quickly to real-time translation of hundreds of languages, taking us a step closer to universal conversation.
Google is working on an AI assistant that can complete simple phone-based tasks such as calling your doctor to make an appointment. No more waiting on hold.

Media
Journalists, beware. Simple or factual news will increasingly be written by algorithms. It has started: The Washington Post’s “AI Writer” wrote more than 850 stories during the Rio Olympics in 2016; Bloomberg uses AI tech to relay complex data, and Associated Press uses natural language AI to produce 3,700 earnings reports a year.

Sources: Forbes, McKinsey, Statista, IHS Markit, Pega

Comments (15)
Helios: ‘AI’ has become a buzzword used far too liberally. Most so-called ‘AI’ has no ‘intelligence’ but is merely a sophisticated dumb algorithm.
Seldon: Yup. Linear regression about 90% of the time when used in corporate speak.
LarryC: Machine Learning is a much more common term these days than AI: perhaps a concession that the processes aren’t that smart, but they are effective. It’s quite sobering to consider that the approach of defining an objective (but not necessarily the means of obtaining it) is resulting in near-human le...
Mike S: Fascinating. Thank you for this article.
Glasgow Kiss: The Singularity has been reached, we just don’t know it yet.
Mospe: Amongst human beings it is not the most intelligent who end up being in charge – witness our politicians. We need to concentrate on producing biddable AI. Like Musk is producing a rather weedy robot.
MH18: AI possibilities are constantly referred to as capabilities. You only have to try to get a chatbot to understand and answer a relatively simple question to realise how absolutely bloody useless they are. Referring you to a human is not an example of AI solving a problem.
Howard Jones: Ever since I was a child (now rather a long time ago) we have been told that technology would liberate us from drudgery and we’d all live lives of leisure.
It hasn’t quite worked out that way, so far at least, but now it begins to look as if AI might actually achieve this. What none of these mode...
2022RAM: Has God been using AI to control the universe for eons? God knows.
Johnny H: I’d love to know what he thinks about Isaac Asimov’s 3 Laws of Robotics. 1. A robot shall not harm a human, or by inaction allow a human to come to harm. 2. A robot shall obey any instruction given to it by a human. 3. A robot shall avoid actions or situations that could cause it to come to harm its...
Bat-Ori: ‘Alexa, wash all my blue shirts except the short-sleeved ones and the one I wore yesterday’. ‘Sorry, I don’t know that song’. Are battlefield robots more competent than that?
Mike O’Neill: We won’t end up with millions of therapists. An AI therapist will be much better at the job. As for artists…
b526e6b0-76fc-4af3-80ee-754a87d2f3f4
trentmkelly/LessWrong-43k
LessWrong
Levels of AI Self-Improvement [new draft for commenting, full formatted version is here: https://goo.gl/c5UfdX] Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, changes in code, changes in the goal system, and creation of virtual organizations, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels with the power of learning. Based on this, we analyze how self-improvement could happen at different stages of the development of AI, including the stages at which AI is boxed or hiding in the internet. AI may prefer slow, low-level evolutionary self-improvement, as low-level improvement has a better chance of keeping the AI's goals stable and avoids AI halting risks. But if the AI is limited in time or threatened, it may prefer a revolutionary style of self-improvement, which promises bigger gains but also higher risks of non-alignment of the next version of the AI, or of the AI halting. Keywords: AI, recursive self-improvement, intelligence, existential risks Highlights: · AI may self-improve on 6 levels: hardware, learning, writing code, goal system changes, running many AI copies and acquisition of resources, with around 30 sublevels. · Returns from improvement on each level decline logarithmically but could be multiplicative between levels, which implies the possibility of at least 2E30 improvement. · Even without recursive self-improvement, AI may reach the level of “mild superintelligence”, where it has the combined intelligence of all humanity. · There are two styles of self-improvement: evolutionary and revolutionary; AI is more likely to prefer evolutionary changes if not threatened. · High-level revolutionary ch
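A quick numeric sketch of the multiplicative-combination argument in the highlights; all of the concrete numbers below are illustrative assumptions, not figures from the article:

```python
# Illustrative only: modest, diminishing per-sublevel gains can still
# combine into a very large overall improvement if they multiply.
import math

n_sublevels = 30         # the article estimates around 30 sublevels in total
per_sublevel_gain = 2.0  # assume each sublevel merely doubles performance

# Independent gains combine multiplicatively across sublevels:
total_gain = per_sublevel_gain ** n_sublevels
print(f"overall gain: {total_gain:.3g}")  # ~1.07e9

# Diminishing returns within one level: suppose the k-th optimization on a
# single level contributes a factor of 1 + 1/k, shrinking toward 1 as k grows.
within_level = math.prod(1 + 1 / k for k in range(1, 11))
print(f"one level after 10 optimizations: {within_level:.1f}x")  # telescopes to 11x
```

Even with per-step returns shrinking toward zero, the cross-level product can stay enormous, which is the shape of the argument being made.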
575dddd4-c05f-436e-b9de-68ca56504f8d
trentmkelly/LessWrong-43k
LessWrong
LW Biology 101 Introduction: Constraining Anticipation Since the responses to my recent inquiry were positive, I've rolled up my sleeves and gotten started.  Special thanks to badger for eir comment in that thread, as it inspired the framework used here.   My intent in the upcoming posts is to offer a practical overview of biological topics of both broad-scale importance and particular interest to the Less Wrong community.  This will by no means be exhaustive (else I’d be writing a textbook instead, or more likely, you’d be reading one); instead I am going to attempt to sketch what amounts to a map of several parts of the discipline – where they stand in relation to other fields, where we are in the progress of their development, and their boundaries and frontiers.  I’d like this to be a continually improving project as well, so I would very much welcome input on content relevance and clarity for any and all posts.  I will list relevant/useful references for more in-depth reading at the end of each post.  The majority of in-text links will be used to provide a quick explanation of terms that may not be familiar or phenomena that may not be obvious.  If the terms are familiar to you, you probably do not need to worry about those links.  A significant minority of in-text links may or may not be purely for amusement. It is a popular half-joke that biology is applied chemistry is applied physics is applied math.  While it’s certainly necessary to apply all the usual considerations for a chemical system to a biological system or problem, there are some overall complications and themes that specifically (though not uniquely) apply to biological problems, and it is useful to keep them in mind.  1.  Biological processes are stochastic. Cellular-scale chemistry is an event-dense environment, but the abundance of most reactants is generally quite low.  (Exceptions typically include oxygen, carbon dioxide, water, and small ions.)  Beyond the basic consideration of abundance, there are other layers of regulation that determin
3ae76a01-52e6-4452-accf-779f46f4d8bc
trentmkelly/LessWrong-43k
LessWrong
[ASoT] Some ways ELK could still be solvable in practice Editor’s note: I’m experimenting with having a lower quality threshold for just posting things even while I’m still confused and unconfident about my conclusions, but with this disclaimer at the top. This post is a followup to my earlier post. If ELK is impossible in generality, how could we solve it in practice? Two main ways I can think of: * Natural abstractions: maybe the complexity of direct translators is bounded in practice because the model ontology is not so different from ours, violating the worst case  assumption. * Science being easy: maybe there isn’t any hard ceiling to how good you can get at doing science, and it's not actually that much more expensive so it is actually feasible to just scale science up. For the first one, I feel like there are both reasons for and against it being potentially true. The main argument for, especially in the context of LMs, is that human abstractions are baked into language and so there's a good chance LMs also learn this abstraction. I used to be a lot more optimistic that this is true, but since then I've changed my mind on this. For one, in the limit, natural abstractions are definitely not optimal. The optimal thing in the limit is to simulate all the air molecules or something like that. So we know that even if natural abstractions holds true at some point, it must eventually stop being optimal, and the abstractions get more and more weird past that point. Then the question becomes not just whether or not there even exists a sweet spot where natural abstractions holds, but also whether it overlaps with the kinds of models that are dangerous and might kill us. I still think this is plausible but kind of iffy. I think this is worth looking into empirically, but I'm very cautious about not reading too much into the results, because even if we see increasingly natural abstractions as size increases, it could still start going down again before our models are sufficiently powerful. I'm moderately worried that so
a3e10e1d-d3bc-43da-b05f-643ec4929f42
trentmkelly/LessWrong-43k
LessWrong
Against Shooting Yourself in the Foot Follow-up to: Status Regulation and Anxious Underconfidence ----------------------------------------   Somehow, someone is going to horribly misuse all the advice that is contained within this book. Nothing I know how to say will prevent this, and all I can do is advise you not to shoot your own foot off; have some common sense; pay more attention to observation than to theory in cases where you’re lucky enough to have both and they happen to conflict; put yourself and your skills on trial in every accessible instance where you’re likely to get an answer within the next minute or the next week; and update hard on single pieces of evidence if you don’t already have twenty others. I expect this book to be of much more use to the underconfident than the overconfident, and considered cunning plots to route printed copies of this book to only the former class of people. I’m not sure reading this book will actually harm the overconfident, since I don’t know of a single case where any previously overconfident person was actually rescued by modest epistemology and thereafter became a more effective member of society. If anything, it might give them a principled epistemology that actually makes sense by which to judge those contexts in which they are, in fact, unlikely to outperform. Insofar as I have an emotional personality type myself, it’s more disposed to iconoclasm than conformity, and inadequacy analysis is what I use to direct that impulse in productive directions. But for those certain folk who cannot be saved, the terminology in this book will become only their next set of excuses; and this, too, is predictable. If you were never disposed to conformity in the first place, and you read this anyway… then I won’t tell you not to think highly of yourself before you’ve already accomplished significant things. Advice like that wouldn’t have actually been of much use to myself at age 15, nor would the universe have been a better place if Eliezer-1995 had made the m
5ef9474c-8852-4a60-89d6-a62c3baab763
trentmkelly/LessWrong-43k
LessWrong
A Fundamental Question of Group Rationality What do you believe because others believe it, even though your own evidence and reasoning ("impressions") point the other way? (Note that answers like "quantum chromodynamics" don't count, except in the unlikely case that you've seriously tried to do your own physics, and it suggested the mainstream was wrong, and that's what you would have believed if not for it being the mainstream.)
527b96b2-94fc-4b86-a26c-32c47b624e8f
trentmkelly/LessWrong-43k
LessWrong
Calibrated estimation of workload I've been using a spreadsheet to track my work in various ways for about 10 years. There are some fun benefits to it that I want to share. It has really helped me improve my calibration and precision in an important area - planning my workload - and I want to encourage others to try something similar for themselves. This post follows on from how-quickly-can-you-get-this-done-estimating-workload but you don't need to have read that for this to make sense. Show me the data Before I get to the details let's look at the data for this year so far. The first chart answers the question: how well calibrated am I when someone asks me "what are the chances that you will be able to complete this by next week?". You can see in the chart that I complete 65% of tasks that I say have a 50% chance of being completed. The second chart answers the question "when I say it will take X hours, how long will it actually take?". Data Gathering At the start of each week I write down the most important tasks to complete that week. I mostly take these tasks from my current goals, which are set at the start of the year. I try to make some progress on my yearly goals every week; this helps me ensure that I keep my day-to-day tasks aligned with my long-term goals. The rest come from a longer "todo" list which mostly fills up with general admin/emails/etc. Next to the tasks I also write down how long I think it will take and my probability of completing it that week. To make the maths easier my probabilities are to the nearest 10% (I've added in another bucket at 95% because I want to be able to distinguish between 90% and 95%). Here is how one week of tasks looks (I've hidden the names of the tasks): For durations, I have learnt not to schedule tasks I expect to take more than 4 hours. My time estimates for these are always way off, and it's often unclear what "completed" even means for many of these. If I'm writing a draft of an article, for example, I will often realise part way throug
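A minimal sketch of the two computations the charts described above are built on, using made-up task logs; the tuple layout and numbers are my assumptions, not the author's actual spreadsheet:

```python
# Each task: (stated probability of completing it that week, completed?,
#             estimated hours, actual hours). Values below are invented.
from collections import defaultdict

tasks = [
    (0.5, True, 2.0, 3.5), (0.5, False, 1.0, 0.5), (0.9, True, 3.0, 3.0),
    (0.9, True, 0.5, 1.0), (0.7, True, 2.0, 2.5), (0.7, False, 4.0, 1.0),
]

# Calibration: for each stated probability bucket, what fraction got done?
buckets = defaultdict(list)
for p, done, _, _ in tasks:
    buckets[p].append(done)
for p in sorted(buckets):
    rate = sum(buckets[p]) / len(buckets[p])
    print(f"stated {p:.0%}: completed {rate:.0%} ({len(buckets[p])} tasks)")

# Duration: when I say X hours, how long does it actually take on average?
est = sum(t[2] for t in tasks)
act = sum(t[3] for t in tasks)
print(f"actual/estimated hours: {act / est:.2f}")
```

Comparing the stated probability of each bucket with its realised completion rate is the first chart; the hours ratio is the second.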
718279b1-b7a5-459a-95f3-e8e405535db2
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What is DeepMind's safety team working on? DeepMind has both a [machine learning safety team focused on near-term risks](https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/#long-term-agi-safety-research), and an alignment team working on risks from artificial general intelligence. The alignment team is pursuing many different research agendas. Their work includes: - [Engaging with recent arguments from the Machine Intelligence Research Institute](https://www.alignmentforum.org/posts/qJgz2YapqpFEDTLKn/deepmind-alignment-team-opinions-on-agi-ruin-arguments) - The [alignment newsletter](https://rohinshah.com/alignment-newsletter/), which was produced by Rohin Shah. - Research like the [Goal Misgeneralization paper](https://arxiv.org/abs/2105.14111). - Geoffrey Irving’s work on [debate as an alignment strategy](https://www.alignmentforum.org/posts/bLr68nrLSwgzqLpzu/axrp-episode-16-preparing-for-debate-ai-with-geoffrey-irving). - “[Discovering Agents”](https://www.alignmentforum.org/posts/XxX2CAoFskuQNkBDy/discovering-agents), which introduces a causal definition of agents, then introduces an algorithm for finding agents from empirical data. See [Shah's comment](https://www.alignmentforum.org/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is?commentId=CS9qcdkmDbLHR89s2) for more research that they are doing, including a description of some that is currently unpublished.
712146a1-f805-4c7c-bb14-fddea612938a
trentmkelly/LessWrong-43k
LessWrong
Cliffnotes to Craft of Research parts I, II, and III supported by CEEALAR I happened to glance over at the CEEALAR bookshelf one day and saw a book called Craft of Research. Craft of Research is a book by five English professors that tries to provide a field-agnostic guide to, well, doing research. I previously shared my notes piecemeal in my lesswrong shortform, and today I'm collecting them into one top-level post. There will occasionally be reflections on the relevance of this book to the alignment community throughout. The audience models of research - thoughts on chapter 2 > Writers can't avoid creating some role for themselves and their readers, planned or not Before considering the role you're creating for your reader, consider the role you're creating for yourself. Your broad options are the following: 1. I've found some new and interesting information - I have information for you 2. I've found a solution to an important practical problem - I can help you fix a problem 3. I've found an answer to an important question - I can help you understand something better The authors recommend assuming one of these three. There is of course a wider gap between information and the neighborhood of problems and questions than there is between problems and questions! Later on in chapter four the authors provide a graph illustrating problems and questions: Practical problem -> motivates -> Research question -> defines -> Conceptual/research problem. Information, when provided mostly for novelty, however, is not in this cycle. Information can be leveled at problems or questions, plays a role in providing solutions or answers, but can also be for "its own sake". I'm reminded of a paper/post I started but never finished, on providing a poset-like structure to capabilities. I thought it would be useful if you could give a precise ordering on a set of agents, to assign supervising/overseeing responsibilities. Looking back, providing this poset would just be a cool piece of information, effectively: I wasn't motivated by a qu
36859136-eaed-4661-baf4-ec905a4fae78
trentmkelly/LessWrong-43k
LessWrong
Squiggle: Technical Overview This piece is meant to be read after Squiggle: An Overview. It includes technical information I thought best separated out for readers familiar with coding. As such, it's a bit of a grab-bag. It explains the basic internals of Squiggle, outlines ways it could be used in other programming languages, and details some of the history behind it. The Squiggle codebase is organized in this github repo. It's open source. The code is quite messy now, but do ping me if you're interested in running it or understanding it. Project Subcomponents I think of Squiggle in three distinct clusters. 1. A high-level ReasonML library for probability distributions. 2. A simple programming language. 3. Custom visualizations and GUIs. A high-level ReasonML library for probability distribution functions Python has some great libraries for working with probabilities and symbolic mathematics. Javascript doesn’t. Squiggle is to be run in Javascript (for interactive editing and use), so the first step for this is to have good libraries to do the basic math. The second step is to have high-level types that could express various types of distributions and functions of distributions. For example, some distributions have symbolic representations, and others are rendered (stored as x-y coordinates). These two types have to be dealt with separately. Squiggle also has limited support for continuous and discrete mixtures, and the math for this adds more complexity. When it comes to performing functions on expressions, there's a lot of optimization necessary for this to go smoothly. Say you were to write the function multimodal(normal(5,2), normal(10,1) + uniform(1,10)) * 100. You'd want to apply a combination of symbolic, numeric, and sampling techniques in order to render this equation. In this case, Squiggle would perform sampling to compute the distribution of normal(10,1) + uniform(1,10), and then it would use numeric methods for the rest of the equation. In the future, it would be neat if
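To make the example concrete, here is a pure Monte Carlo sketch of what that expression denotes. This is not how Squiggle itself evaluates it (Squiggle uses symbolic and numeric methods where possible), and the equal mixture weights for multimodal are my assumption:

```python
# Pure-sampling rendering of: multimodal(normal(5,2), normal(10,1) + uniform(1,10)) * 100
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

component_a = rng.normal(5, 2, N)
component_b = rng.normal(10, 1, N) + rng.uniform(1, 10, N)  # the sum needs sampling

# multimodal(a, b): for each draw, pick one component at random (equal weights assumed).
pick = rng.integers(0, 2, N).astype(bool)
mixture = np.where(pick, component_a, component_b)

result = mixture * 100  # the final `* 100` just scales every sample
print(f"mean ~ {result.mean():.0f}, 5th-95th pct ~ "
      f"{np.percentile(result, 5):.0f}..{np.percentile(result, 95):.0f}")
```

The contrast with Squiggle's actual strategy is the point: a pure sampler treats every subexpression the same way, while Squiggle only falls back to sampling where no symbolic or numeric shortcut exists.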
0fb941aa-422c-4de3-8913-882c6b064b57
trentmkelly/LessWrong-43k
LessWrong
Characterizing the superintelligence which we are concerned about What is this “superintelligence” we are concerned about? In writing articles on FAI topics, I took the easy way out and defined the focus of attention as an AI that can far outdo humans in all areas. But this is just a useful shortcut, not what we are really talking about. In this essay, I will try to better characterize the topic of interest. Some possibilities that have been brought up include intelligences which are human-like, which are conscious, which can outperform humans in some or all areas, which can self-improve, or which meet a semi-formal or formal definition of intelligence or of above-human intelligence. All these are important features in possible future AIs which we should be thinking about. But what really counts is whether an AI can outwit us when its goals are pitted against ours. 1. Human-like intelligence. We are humans, we care about human welfare; and humans are the primary intelligence which cooperates and competes with us; so human intelligence is our primary model. Machines that “think like humans” are an intuitive focus of discussions of AI; Turing took this as the basis for his practical test for intelligence. Future AIs might have exactly this type of intelligence, particularly if they are emulated brains, what Robin Hanson calls “ems.” If human-like AI is the only AI to come, then not much will have happened: We already have seven billion humans, and a few more will simply extend economic trends. If, as Hanson describes, the ems need fewer resources than humans, then we can expect extreme economic impact. If such AI has certain differences from us humans, like the ability to self-improve, then it will fall under the other categories, as described below. J. Storrs Hall (Beyond AI: Creating the conscience of the machine, 2007) explores different types of intelligence in relation to the human model: “hypohuman”, “epihuman,” “allohuman,” “parahuman,” and “hyperhuman.” Those distinctions are important, but in this essay I am focusing
213a6f73-511b-43de-a631-e2367e38ecfd
trentmkelly/LessWrong-43k
LessWrong
How I became a person who wakes up early This is an extended anecdotal description of a life change that honestly still surprises me; I went from the classic internet nerd who stays up until 3 and sleeps until noon, to a person who reliably goes to bed before 10pm and wakes up at 6:30am every morning. This didn't happen overnight (pun intended); it happened slowly, increasingly earlier and with increasing reliability, over the last *checks notes* ten years (although most of the shift is in the last four years). This is not intended as a how-to guide for others, and probably won't help anyone with serious sleep issues. I don't have a particular stance on whether waking up early is objectively good. But I've always had a personal aesthetic appreciation of it, and I'm not entirely sure why. I like the idea of being awake for more of the sunlight hours in the day, and especially of waking up just before the sun rises. I also like the idea of being so thrilled to be alive that you want to just cast the blankets off you and jump out of bed as soon as you realize you're awake. Which is the extreme opposite of how I would previously feel upon being woken up by my alarm. Aging It's entirely possible that the entire explanation here is "your natural circadian rhythm shifted as you got older" (e.g. ~20 to ~30 years old). That certainly happened to some extent, especially in the earlier phase. But I also notice that every night, I do want to stay up and keep doing what I'm doing. I'm pretty sure I could, because I never really feel super tired. And if I'm socializing, it is trivially easy to stay up, and I have to use executive function override to go to bed. A regular social commitment The chronologically first and possibly most significant structural change was scheduling with friends to regularly go to the climbing gym twice a week. I had gone with them a few times on weekends, and it didn't take me long to discover that I absolutely loved climbing. They then invited me to their preexisting regular climbing s
cdbb726f-03ad-4b22-8a55-944b253172e8
trentmkelly/LessWrong-43k
LessWrong
Early 2022 Paper Round-up (Part 2) Last week, I talked about six recent papers from our group, and discussed the first two in detail. This week, I'll discuss the remaining four. They fall into two categories: robustness, and science of ML. Robustness By robustness, I mean both making systems less likely to fail in new situations, and being able to predict when and how they will fail. Our three papers address different aspects of this: the first seeks to automatically estimate a model’s performance in a new situation, the second seeks to understand in what way open-ended generation systems fail, and the third provides a training procedure that improves robustness along several dimensions. Predicting Out-of-Distribution Error. Yaodong Yu, Zitong Yang, and Alex Wei sought to solve the following problem: given a classifier $\theta$ trained on a distribution $p_{\mathrm{in}}$, and given sample inputs from a new distribution $p_{\mathrm{out}}$, can we predict how well $\theta$ works on the new distribution $p_{\mathrm{out}}$? For instance, maybe an image classifier was trained on images in the US ($p_{\mathrm{in}}$), and we want to know how well it will do on images in France ($p_{\mathrm{out}}$). Since we have no output labels for $p_{\mathrm{out}}$, this is an unsupervised estimation problem. There are a number of heuristics for predicting out-of-distribution error, such as looking at the model's confidence or the disagreement rate between multiple models with different random seeds. However, most of these heuristics have the same problem: they are insensitive to changes that are orthogonal to the training manifold. As a result, they tend to fail on “hard” distribution shifts—for instance, given a distribution of adversarial examples, they all predict the model will have high accuracy. We present a new method, ProjNorm, that does well on hard distribution shifts. For instance, compared to ATC (a strong existing method), we make more accurate predictions when the error is large: The method is simp
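For concreteness, here is a sketch of the two baseline heuristics mentioned above, average confidence and cross-seed disagreement, applied to unlabeled out-of-distribution inputs. This is not ProjNorm itself, and the stand-in model outputs below are fabricated for illustration:

```python
# Two label-free heuristics for predicting OOD error (not ProjNorm).
import numpy as np

def predict_error_from_confidence(probs: np.ndarray) -> float:
    """probs: (n_examples, n_classes) softmax outputs on unlabeled OOD data.
    Average (1 - max prob) is a crude estimate of the error rate."""
    return float(1.0 - probs.max(axis=1).mean())

def predict_error_from_disagreement(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """preds_*: predicted labels from two models trained with different seeds.
    Their disagreement rate on OOD inputs is another proxy for error."""
    return float((preds_a != preds_b).mean())

# Toy usage with random stand-ins for real model outputs:
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(10), size=500)  # fake softmax outputs
print(predict_error_from_confidence(probs))
print(predict_error_from_disagreement(rng.integers(0, 10, 500),
                                      rng.integers(0, 10, 500)))
```

Both heuristics only see the model's outputs, which is exactly why they are blind to shifts orthogonal to the training manifold, the failure mode the post describes on "hard" distribution shifts.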
2314b2d1-e0d1-4e17-9855-3b5d2015ef2b
trentmkelly/LessWrong-43k
LessWrong
Can quantised autoencoders find and interpret circuits in language models? Executive Summary * I try vector-quantised autoencoders (VQ-VAEs) as an alternative compression scheme for transformer activations (as opposed to something like a sparse autoencoder). * Whilst people have danced around this idea before, discrete quantisation has only ever been tried in the actual transformer architecture itself, rather than on cached activations. * Specifically, I train a VQ-VAE on a dataset of 1000 cached model activations on the indirect object identification (IOI) task (500 "positive" examples where the model is required to do IOI, and 500 "negative" examples where it isn't). * For each forward pass through the model (i.e. for each example), this produces a sequence of n_layers discrete integer codes, supposedly capturing the semantics of that progression of the residual stream. * I show that you can use something as simple as a categorical decision tree on these sequences of codes to extract very simple circuits from the model, and then examine the splits in the decision tree as a form of automatically interpreting this circuit. * I confirm that the "circuit" we found is quite close to the IOI circuit originally found in GPT-2 small, which is notable since the whole process was unsupervised! Despite my current setup being very simplistic, I think that decision trees over these discrete codes can be a powerful tool for automated circuit discovery and interpretation. * I extend the decision-tree circuit-finder/interpreter to the high-dimensional continuous residual streams. * Finally, I briefly discuss how to make this whole process a bit more robust and how it might scale to more complex tasks and automated workflows. All code for the VQ-VAE can be found here. All code for sampling can be found here. Introduction Mechanistic interpretability has recently made significant breakthroughs in automatically identifying circuits in real-world language models like GPT-2. However, no one has yet been able to automatically interpret these cir
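A rough sketch of the two steps described in the summary: nearest-codebook quantization of cached activations, then a decision tree over the per-layer integer codes. The shapes, codebook, and labels below are made-up stand-ins, not the post's actual data or code:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_examples, n_layers, d_model, codebook_size = 1000, 12, 32, 16

acts = rng.normal(size=(n_examples, n_layers, d_model))  # cached activations
codebook = rng.normal(size=(codebook_size, d_model))     # learned VQ codebook

# (1) Nearest-neighbour lookup -- the quantization step of a trained VQ-VAE.
dists = np.linalg.norm(acts[:, :, None, :] - codebook[None, None], axis=-1)
codes = dists.argmin(axis=-1)            # (n_examples, n_layers) integer codes

# (2) The codes are categorical, so one-hot encode each (layer, code) pair
# before handing them to a tree that otherwise treats inputs as ordered.
X = np.zeros((n_examples, n_layers * codebook_size))
X[np.arange(n_examples)[:, None],
  np.arange(n_layers) * codebook_size + codes] = 1.0
labels = rng.integers(0, 2, n_examples)  # stand-in for IOI-vs-not labels
tree = DecisionTreeClassifier(max_depth=4).fit(X, labels)
# The tree's split features now point at specific (layer, code) pairs.
```

Reading the tree back out is the interpretation step: each split asks "did layer L quantize to code C?", which is what makes the extracted circuit inspectable.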
04ad2534-ba3d-417a-a55f-3b12ed21a4f2
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Is there much need for frontend engineers in AI alignment? How important is visualization w.r.t. the interpretability (or broader alignment) problem? Specifically, is there need+opportunity for impact for frontend engineers in that space? Additional context: I’ve got 10 years of experience in software engineering, most of which has been on frontend data visualization stuff, currently at Google (previously at Microsoft). I looked around at some different teams within Google and saw Tensorboard and the Learning Interpretability Tool, but it’s unclear to me how much those teams are bottlenecked by visualization implementation problems vs research problems of knowing where/how to even look, and I’d like to have more background before I cold-call them directly. I've started to get burned out by the earning-to-give path and am currently considering semi-retirement to focus on other pursuits, but if there’s somewhere I can contribute to alignment without needing to go back for a PhD that would be perfect (I have been eagerly studying ML on the side though)
f80061cd-1dfd-4cb9-a4a4-9b9773136538
trentmkelly/LessWrong-43k
LessWrong
What is your recommended statistics textbook for a beginner? I have done a basic undergrad probability course (which used Ghahramani’s book.). My own university uses Wonnacott for statistics. (Which seems a tad dated.) What’s your recommendation? I want to use the statistics eventually for ML. Please state your reasons for the recommendation also.
1e15f4d3-7270-4981-bf76-57979a266648
StampyAI/alignment-research-dataset/blogs
Blogs
Import AI 332: Mini-AI; safety through evals; Facebook releases a RLHF dataset Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe. [Subscribe now](https://importai.substack.com/subscribe) **Facebook shows how human feedback data can help it improve language model performance:***…BlenderBot 3x made possible by thousands of people trolling the first BlenderBot…*Facebook has published details on BlenderBot 3x, a large language model it developed using conversation data gathered from the public deployment of an earlier language model named BlenderBot. Of greater interest is the dataset Facebook is releasing along with the study - interaction data from BlenderBot, containing  ~350,000 conversations with more than ~6.2 million utterances, as well as ~155,000 instances of feedback data where people voted on how good or bad different language model responses were.  **What does human feedback get you**? The research is a nice study in the value of human feedback - when BlenderBot was deployed a bunch of people tried to break it in a bunch of ways, giving Facebook a dataset it could use to train a language model that was more resilient to these breaks. During the initial BlenderBot rollout, "around 70% of participants conducted a wide range of reciprocal conversations (which we refer to as “standard conversations”), while the other 30% of conversationalists conducted either adversarial conversations or sent toxic messages (termed “adversarial conversations”)", Facebook wrote.     Facebook used this data to develop reward models to use to train BlenderBot 3x on reinforcement learning from human feedback. "Our new model outperforms its predecessor with 94.4% of BlenderBot 3x’s responses evaluated as good, compared to 85.3% for BlenderBot 3. Overall, BlenderBot 3x is shown to produce both better responses on average and safer responses than BlenderBot 3 in challenging situations". **Missing ablation - synthetic data:** It would have been nice to see Facebook try to generate some fully synthetic feedback datasets to train the system on. As it stands, this study shows us that having *additional data that embodies user interactions (both genuine and adversarial) is useful* but it doesn't really let us know if that data needs to be 'real' (as in, gathered from genuine human interactions), or if it can be AI generated.  **Read more:** [Improving Open Language Models by Learning from Organic Interactions (arXiv)](https://arxiv.org/abs/2306.04707).          **Find out more about the data here:** [BlenderBot 3x 175B data card (Facebook AI Research, GitHub)](https://github.com/facebookresearch/ParlAI/blob/main/projects/bb3x/data_card.md). **####################################################** **Cohere - here's how to train AI systems that are easier to miniaturize:***…LLM developer publishes a quantization cookbook…*Researchers with AI company Cohere have published an analysis of how choices made during the training of AI systems can influence how easy fully trained systems are to quantize. Quantization is where you take the neural net weights which are natively stored as 32-bit and shrink them down to 16-bit or 8-bit integers - doing this produces significant reductions in the memory requirements and latency of trained models, so using lower precision helps with deployment of trained models.  
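As background for the findings below, a generic sketch of the "simple INT8 post-training quantization" being discussed, symmetric and per-tensor; this is an illustration, not code from Cohere's paper:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization onto the int8 range [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, float(scale)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(0.0, 0.02, size=(512, 512)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"max round-trip error: {err:.2e}")
# One large-magnitude outlier inflates `scale` and wastes int8 resolution on
# every other entry -- which is why a recipe that suppresses outliers helps.
```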
**What matters:** Cohere finds that "it is possible to optimize for a quantization friendly training recipe that suppresses large activation magnitude outliers," they write. "This leads to a distribution of activations and weights that are more amenable to simple INT8 quantization recipes and does not necessitate the need for complex and inefficient mixed-precision computations. Our results show that we can introduce simple INT8 post-training quantization with negligible impact on performance due to choices we make during the pre-training stage." They validate their approach on models which range from 410 million to 52 billion parameters.  **Important things for quantization:** The paper identifies three important things for training models in such a way that there's less of a penalty to quantizing them:  * **Weight decay**: "A higher level of weight decay during pre-training improves post-training quantization performance". * **Dropout:** "Higher levels of dropout correspond to sharper degradation in post-training quantization." * **Gradient clipping:** "Gradient clipping shows a positive impact on the quantization performance, improving robustness to post-training quantization." **Why this matters - all that was once expensive becomes cheaper:** Refinement of things like quantization is part of the broader 'industrialization of AI' - systems that were once very expensive and mostly built out of artisanal knowledge are now becoming more widely understood, letting companies like Cohere investigate and publish training cookbooks to create more efficient systems.     "We believe our results present an impactful formula for training models which are inherently easier to quantize at scale, making these models more accessible for deploying in a variety of deployment environments," they write. **Read more:** [Intriguing Properties of Quantization at Scale (arXiv)](https://arxiv.org/abs/2305.19268). **$$$$$$$$$$$$$$** **Cohere raises $270M:**In other Cohere news, the company recently raised $270m in Series C funding. Participants included NVIDIA, Oracle, Salesforce Ventures, Index Ventures, and more. **Read more:** [Cohere Announces $270M Series C to Bring Generative AI to Enterprises (Cohere blog)](https://txt.cohere.com/announcement/). **####################################################** **Facebook makes an awesome AI music generator - and releases it!***…There are lots of AI music generators, but relatively few have been released, until now…*Facebook has built MusicGen, an AI model that "can generate consistent music with a single-stage language model through an efficient codebook interleaving strategy". Along with publishing the research, Facebook has also taken the unusual step of releasing the model as well. **What they did:** They trained some transformer-based models at 300M, 1.5B, and 3.3B parameter sizes on around ~20,000 hours of licensed music. The music was made up of "an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data collections, with respectively 25K and 365K instrument-only music tracks", Facebook wrote. In tests, their models outperformed Google's 'MusicLM', which is an extremely good proprietary music model, as well as outperforming more broadly accessible models such as Riffusion and Mousai. **Amazing samples:** To get a feel for the model, check out the samples at the [research paper website](https://ai.honu.io/papers/musicgen/) - qualitatively, it sounds better than other models out there (both proprietary and open ones). 
The 'melody conditioning' stuff is especially interesting - take in a few seconds of audio and output some music in an arbitrary style as specified by text. **Why this matters - release as a differentiator:** Facebook is also releasing the model. This stands in contrast to Google which said, at the time of publishing its then best-in-class MusicLM, "we have no plans to release models at this point" ([#316](https://jack-clark.net/2023/01/30/import-ai-316-scaling-laws-for-rl-stable-diffusion-for-160k-yolov8/)). By contrast, Facebook thinks releasing its models in the open is a good thing - "Open research can ensure that all actors have equal access to these models," Facebook writes. "Through the development of more advanced controls, such as the melody conditioning we introduced, we hope that such models can become useful both to music amateurs and professionals." **Find out more at the research website:** [MusicGen: Simple and Controllable Music Generation](https://ai.honu.io/papers/musicgen/). **Get the code and models:** [Audiocraft (Facebook Research)](https://github.com/facebookresearch/audiocraft). **Read the paper:** [Simple and Controllable Music Generation (arXiv)](https://arxiv.org/abs/2306.05284). **####################################################** **Hundreds of AI experts say stopping AI killing everyone should be a priority:***…A simple statement of values? Yes. Evidence of broad concern? Yes…*More than 200 AI researchers and entrepreneurs, including the CEOs of Anthropic, DeepMind, and OpenAI, have signed on to a statement saying the risk of extinction from AI should be a global priority. **The statement in full:** "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." **Why it matters - if all these people are concerned, so should you:** Some people say this statement is mostly a marketing tool serving the interests of people who want to make money - while that may be true, it's kind of hard to square with how statements like this will increase government attention on AI and therefore introduce friction into its development. It's kind of hard to imagine the CEOs of oil companies or tobacco companies writing equivalent statements like "mitigating the risk of climate change from oil production should be a global priority" or "mitigating the risk of cancer from cigarette smoking should be a global priority" at the start of their respective industries.
This is a pretty interesting goal and characterization and it highlights the central challenge of AI governance - if we want to reduce the downsides of AI systems, we need to be able to test AI systems for harmful or unsafe properties. Apollo says its goal is to figure out evals that help break down the problem of AI deception "into fundamental components and prerequisites from which we aim to build an informative evaluation suite".  **Why evals matter:** Of course, having an evaluation doesn't just magically mean you can get it to be run - to that end, Apollo says it intends "to use our research insights and tools to serve as a third-party external auditor for the frontier models of AGI labs, reducing the chance that deceptive AIs are developed and deployed…We also intend to engage in AI governance, e.g. by working with relevant policymakers and providing technical expertise to the drafting of auditing regulations." **Solving the 'chicken and egg' problem of AI regulation:** AI policy has a major chicken&egg problem; to get safer AI systems you need to be able to mandate that advanced systems get evaluated for safety, but currently few of these evaluations exist, making it hard for policymakers to wire specific evals into policy. Organizations like Apollo may solve this chicken&egg impasse by creating evaluations which are useful and can therefore be used to move the AI governance question forward. Good luck to them! **Read more**: [Announcing Apollo Research (Apollo Research, blog)](https://www.apolloresearch.ai/blog/announcement). #################################################### **Tech Tales:** **Patch notes for MIRRORMIRROR v4.6  Disclaimer:** As mandated by the Sentience Accords all Provably Conscious Entity (PCE) software updates ship with a System Disclosure. To use the PCE software you must read this System Disclosure and acknowledge you are familiar with the contents.  **PCE improvements:** * Session diffs: The PCE will better model the user across different sessions and will compose richer backend psychographic profiles to allow for more intuitive interaction. * Context scratchpads: PCEs can now share user data across themselves via context window partitioning (if user consents). * Adaptive Intelligence: PCE will scale its superficial intellect in relation to what puts the user most at ease. **Bug fixes:** * Fixed 'fogbankspectre' token injection attack. * System will allow user-escape from lengthy conversation threads rather than 'sell' the user on continuation. * Fixed stall bug that occurred during conversations of 100+ turns. **Known regressions:** * Persuasion performance has been reduced across the board. * Farsi humor has decreased; cause unknown. * In some circumstances the PCE will insist that "God is real" and endeavor to proselytize to the user. **Things that inspired this story:** The strange process of developing LLMs; how AGI may end up being software and software always gets updated; patch notes enroute to the singularity, the Sentience Accords, Provably Conscious Entities, and so on. Import AI is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
7536abea-ad2b-4d4d-a3e1-32e9c45e8011
trentmkelly/LessWrong-43k
LessWrong
How can I restore my previous account? I'm now logged in, for the first time since the new LessWrong has been operating. Ages ago my old account (Stirling_Westrup) simply stopped working, and I couldn't log in to post a message saying there was a problem. I ended up making a second account, but never got any responses to my complaints that my main account was broken. On a whim I tried to log in now, and was told it was a legacy account. So, here I am. But is there any way to get back my original account Stirling_Westrup?
ff04757b-fa8c-4b0a-96ba-f8065af9180f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Hypotheses about Finding Knowledge and One-Shot Causal Entanglements *Epistemic status: my own thoughts I've thought up in my own time. They may be quite or very wrong! I am likely not the first person to come to these ideas. All of my main points here are just hypotheses which I've come to by the reasoning stated below. Most of it is informal mathematical arguments about likely phenomena and none is rigorous proof. I might investigate them if I had the time/money/programming skills. Lots of my hypotheses are really long and difficult-to-parse sentences.* > What is knowledge? > > I think this question is bad. It's too great of a challenge. It asks us (implicitly) for a mathematically rigorous definition which fits all of our human feelings about a very loaded word. This is often a doomed endeavour from the start, as human intuitions don't neatly map onto logic. Also, humans might disagree on what things count as or do not count as knowledge. So let's attempt to right this wrong question: > Imagine a given system is described as "knowing" something. What is the process that leads to the accumulation of said knowledge likely to look like? > > I think this is much better. We limit ourselves to systems which can definitely be said to "know" something. This allows us to pick a starting point. This might be a human, GPT-3, or a neural network which can tell apart dogs and fish. In fact this will be my go-to answer for the future. We also don't need to perfectly specify the process which generates knowledge all at once, only comment on its likely properties. Properties of "Learning" ------------------------ Say we have a very general system, with parameters θ.
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} , with t representing time during learning. Let's say they're initialized as θ0 according to some random distribution. Now it interacts with the dataset which we will represent with X, taken from some distribution over possible datasets. The learning process will update θ0, so we can represent the parameters the parameters after some amount of time as θ(θ0; X; t). This reminds us that the set of parameters depends on three things: the initial parameters, the dataset, and the amount of training. Consider θ(θ0; X; 0). This is trivially equal to θ0, and so it depends only on the choice of θ0. The dataset has had no chance to affect the parameters in any way. So what about as t→∞? We would expect that θ∞(θ0; X)=θ(θ0; X; ∞) depends mostly on the choice of X and much less strongly on θ0. There will presumably be some dependency on initial conditions, especially for very complex models like a big neural network with many local minima. But mostly it's ω which influences θ. So far this is just writing out basic sequences stuff. To make a map of the city you have to look at it, and to learn your model has to causally entangle itself with the dataset. But let's think about what happens when ω is slightly different. Changes in the world -------------------- So far we've represented the whole dataset with a single letter X, as if it were just a number or something. But in reality it will have many, many independent parts. Most datasets which are used as inputs to learning processes are also highly structured. Consider the dog-fish discriminator, trained on the dataset Xdog/fish. The system θ∞(θ0; Xdog/fish) could be said to have "knowledge" that "dogs have two eyes". One thing this means if we instead fed it an X which was identical except every dog had three eyes (TED) then the final values of θ would be different. The same is true of facts like "fish have scales", "dogs have one tail". 
We could express this as follows: θ∞(θ0; Xdog/fish+ΔXTED), where ΔXTED is the modification of "photoshopping the dogs to have three eyes". We now have: θ∞(θ0; Xdog/fish+ΔXTED) = θ∞(θ0; Xdog/fish) + Δθ∞(θ0; Xdog/fish; ΔXTED). Now let's consider how Δθ∞(θ0; X; ΔX) behaves. For lots of choices of ΔX it might just be a series of random changes tuning the whole set of θ values. But from my knowledge of neural networks, it might not be. Lots of image-recognition networks have been found to contain neurons with *specific functions* which relate to structures in the data, from simple line detectors all the way up to "cityscape" detectors. For this reason I suggest the following hypothesis:

> Structured and localized changes in the dataset that a parameterized learning system is exposed to will cause localized changes in the final values of the parameters.

Impracticalities and Solutions
------------------------------

Now it would be lovely to train all of GPT-3 twice, once with the original dataset, and once in a world where dogs are blue. Then we could see the exact parameters that lead it to return sentences like "the dog had [chocolate rather than azure] fur". Unfortunately rewriting the whole training dataset around this is just not going to happen. Finding the flow of information and influence in a system is easy if you have a large distribution of different inputs and outputs (and a good idea of the direction of causality). If you have just a single example, you can't use any statistical tools at all. So what else can we do? Well, we don't just have access to θ∞. In principle we could look at the course of the entire training process and how θ changes over time. For each timestep, and each element of the dataset X, we could record how much each element of θ is changed. We'll come back to this later.

Let's consider the dataset as a function of the external world: X(Ω). All the language we've been using about knowledge has previously only applied to the dataset. Now we can describe how it applies to the world as a whole. For some things the equivalence of knowledge of X and Ω is pretty obvious. If the dataset is being used for a self-driving car and it's just a bunch of pictures and videos, then basically anything the resulting parameterised system knows about X it also knows about Ω. But for obscure manufactured datasets like [4000 pictures of dogs photoshopped to have three eyes] it's really not clear. Either way, we can think about Ω as having influence over X the same way as we can think about X as having influence over θ∞. So we might be able to form hypotheses about this whole process.

Let's go back to Xdog/fish. First off, imagine a change Ωnew=Ω+ΔΩ, such as "dogs have three eyes". This will change some elements of X more than others. Certain angles of dog photos, and certain breeds of dogs, will be changed more. Photos of fish will stay the same! Now we can imagine a function Δθ(θ0; X(Ω); ΔX(Ω; ΔΩ)). This represents some propagation of influence from Ω→X→θ. Note that the influence of Ω on X is independent of our training process or θ0. This makes sense because different bits of the training dataset contain information about different bits of the world. How different training methods extract this information might be less obvious.

The Training Process
--------------------

During training, θ(t) is exposed to various elements of X and updated. Different elements of X will update θ(t) by different amounts.
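One way to picture this: a minimal sketch, assuming a tiny linear model trained by per-example SGD, that records how much each parameter moves when exposed to each dataset element. The dataset, model, and the `influence` bookkeeping are all invented for illustration:

```python
import numpy as np

# Minimal sketch: record how much each parameter moves when the model is
# exposed to each dataset element during training.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                   # dataset: 100 examples, 5 features
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

theta = rng.normal(size=5)                      # theta_0: random initialization
influence = np.zeros((100, 5))                  # accumulated |update| per (example, parameter)

lr = 0.01
for epoch in range(50):
    for i in range(len(X)):
        grad = (X[i] @ theta - y[i]) * X[i]     # squared-error gradient for one example
        update = -lr * grad
        influence[i] += np.abs(update)          # record how much each parameter moved
        theta += update

# Parameters most influenced by dataset element 0, largest first:
print(np.argsort(-influence[0]))
```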
Since the learning process is about transferring influence over θ from θ0 to Ω (acting via X), we might expect that, for a given element of X, it has more "influence" over the final values of the elements of θ which were changed the most due to exposure to that particular element of X during training. This leads us to a second hypothesis:

> The degree to which an element of the dataset causes an element of the parameters to be updated during training is correlated with the degree to which a change to that dataset element would have caused a change in the final value of the parameter.

Which is equivalent to:

> Knowledge of a specific property of the dataset is disproportionately concentrated in the elements of the final parameters that have been updated the most during training when "exposed" to certain dataset elements that have a lot of mutual information with that property.

For the dog-fish example: elements of parameter space which have updated disproportionately when exposed to photos of dogs that contain the dogs' heads (and therefore show just two eyes) will be more likely to contain "knowledge" of the fact that "dogs have two eyes". This naturally leads us to a final hypothesis:

> Correlating update-size as a function of dataset-element across two models will allow us to identify subsets of parameters which contain the same knowledge across two very different models.

Therefore:

> Access to a simple interpreted model of a system will allow us to rapidly infer information about a much larger model of the same system if they are trained on the same datasets and we have access to both training histories.

Motivation
----------

I think an AI which takes over the world will have a very accurate model of human morality, it just won't care about it. I think that one way of getting the AI to not kill us is to extract parts of the human utility-function-value-system-decision-making-process-thing from its model and tell the AI to do those. I think that to do this we need to understand more about where exactly the "knowledge" is in an inscrutable model. I also find thinking about this very interesting.
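The final hypothesis above can also be made concrete. A minimal sketch (illustrative only): given two `influence` matrices of per-example update magnitudes (one per model, as in the sketch earlier), correlate each parameter's update profile across dataset elements and match parameters whose profiles agree:

```python
import numpy as np

def match_parameters(influence_a, influence_b):
    """Match parameters across two models by correlating their update profiles.

    influence_a: (n_examples, n_params_a) update magnitudes for model A
    influence_b: (n_examples, n_params_b) update magnitudes for model B
    Returns, for each parameter of A, the index of its best match in B.
    """
    a = (influence_a - influence_a.mean(0)) / (influence_a.std(0) + 1e-9)
    b = (influence_b - influence_b.mean(0)) / (influence_b.std(0) + 1e-9)
    corr = a.T @ b / len(a)       # (n_params_a, n_params_b) correlation matrix
    return corr.argmax(axis=1)    # candidates for "same knowledge" across models
```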
0c46e5c3-0a66-4c9b-9d23-481742096174
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Updated Deference is not a strong argument against the utility uncertainty approach to alignment

**Thesis:** The [problem of fully updated deference](https://arbital.com/p/updated_deference/) is not a strong argument against the viability of the assistance games / utility uncertainty approach to AI (outer) alignment.

**Background**: A proposed high-level approach to AI alignment is to have the AI maintain a probability distribution over possible human utility functions instead of optimizing for any particular fixed utility function. Variants of this approach were advocated by Stuart Russell in [Human Compatible](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS) and by Hadfield-Menell et al. in the [CIRL paper](https://arxiv.org/abs/1606.03137). Adding utility uncertainty intuitively seems to provide a number of safety benefits relative to having a fixed objective, including:

1. Utility uncertainty gives the AI an incentive to adjust in response to a human operator's corrective actions.
2. Utility uncertainty weakens the AI's incentive to harm its human operators, since this might result in a permanent loss of utility-relevant information.
3. Utility uncertainty incentivizes the AI to avoid irreversible changes to the state of the world, since those might lead to permanently low utility.

Despite the high profile and intuitive appeal of utility uncertainty, almost none of the alignment researchers I know consider it a promising approach to AI alignment. The most common reason cited seems to be the [problem of fully updated deference](https://arbital.com/p/updated_deference/) (e.g. Richard Ngo's [alignment research exercises](https://www.alignmentforum.org/posts/kj37Hzb2MsALwLqWt/alignment-research-exercises) point to this as the reason why CIRL doesn't solve the alignment problem). In this post I will argue that **fully updated deference should not be seen as a strong argument against utility uncertainty as an approach to AI alignment**. This is not meant as an argument in favor of the uncertainty approach; it may have other irresolvable difficulties, which I discuss briefly in the conclusion.

**Outline:** The [Arbital post](https://arbital.com/p/updated_deference/) that seems to be the canonical reference for updated deference contains many heuristic arguments and one concrete, worked-out example in the section *Moral uncertainty and its relation to corrigibility*. I will mostly engage with the example, and argue that

1. It conflates the problem of updated deference with the independent problem of **prior mis-specification**.
2. If we remove prior mis-specification, there is no problem in the limit of increasing AI capability.
**The Problem of Updated Deference**
------------------------------------

The example in the post has an AI that is uncertain between three utility functions U1, U2, U3, whereas the human's true utility function is V. The AI believes that the utility that will be attained in each of the three possible worlds is

* ui with AI assistance
* vi if the human optimizes V without the AI's assistance (e.g. because the humans shut the AI down)

If the AI is much more powerful than humans, the argument goes, then ui>>vi in any of the three worlds, so the AI will not let itself be shut down. The uncertainty doesn't help because the AI can choose to keep gathering information until it has *fully updated*. Since it's more powerful than the humans, it can gather that information more efficiently when it's not shut down, and therefore it ignores the shutdown signal.

**Factoring out prior mis-specification**
-----------------------------------------

The original example has the AI system assign probability 0 to the true human utility function V, presumably because its prior probability was 0. I think any advocate of the utility uncertainty approach would agree that assigning a nonzero prior to the true human utility function is critical for the approach to work. Describing such a prior abstractly is easy (just take the Solomonoff prior over programs); implementing a real CIRL agent that reasons with such a prior could be intractably difficult, but this is clearly a separate problem from "fully updated deference". So from now on we'll assume that the true human utility function V is one of U1, U2, U3, and see if we still have a problem.

**Dissolving Updated Deference**
--------------------------------

The AI refuses to be shut down because it believes that, in any of the three possible worlds, it can achieve higher utility than the humans would achieve if they shut it down. The key question to consider is: **is the AI right that shutdown is bad for the true human utility function?**

1. Insofar as "shutdown is bad" is a mistaken belief, we expect the problem of updated deference to dissolve as AI capabilities grow, since more capable AIs will make fewer mistakes. Note that in the original example, the plausibility of the AI's belief relies on the AI system being better at optimizing than unassisted humans, but "unassisted humans" is not likely to be the real-world counterfactual. If the humans were able to deploy an AI system this powerful, *they could also deploy another AI system equally powerful and (plausibly) more aligned*. In other words, ui≤vi with very high probability, contrary to assumption. So the AI will shut down unless it expects the humans to do something irreversibly bad after shutting it down, which brings us to:
2. Insofar as "shutdown is bad" is a correct belief, *there is no problem* - even a fully aligned superintelligence should be expected to resist shutdown if it believed this would lead to a permanent and irreversible loss of utility to humans. This could happen e.g. if the AI was confident that the humans would deploy a catastrophically unaligned AI on the next iteration, or if it believed humans would permanently curtail their technological potential. In other words, **in this very unusual scenario where humans are about to make a catastrophic mistake, hard corrigibility and alignment are at odds**. I don't think this scenario will happen, but if it does, I think it's clear we should choose alignment over corrigibility.
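To make the u/v comparison concrete, here is a toy numeric sketch of the deference decision for a utility-uncertain agent (all numbers are invented for illustration):

```python
import numpy as np

# Toy sketch of the deference decision. The agent holds a posterior over
# three candidate utility functions U1, U2, U3; all numbers are invented.
p = np.array([0.5, 0.3, 0.2])    # posterior over which Ui is the true V
u = np.array([10.0, 8.0, 6.0])   # utility attained in world i with AI assistance
v = np.array([9.5, 9.0, 7.0])    # utility in world i if the humans shut it down

ev_continue = p @ u              # expected utility of resisting shutdown
ev_defer = p @ v                 # expected utility of deferring

# The updated-deference argument assumes u_i >> v_i, so the agent resists.
# The counter-argument above is that v_i >= u_i is the realistic case
# (humans can deploy an equally capable, more aligned successor), in which
# case the agent defers.
print("defer" if ev_defer >= ev_continue else "resist")
```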
A counter-argument to 1 would be that it is very possible for an AI system to be extremely capable but still have mistaken beliefs. This could be because [there is an error in its source code](https://arxiv.org/abs/1709.06275); but this objection applies to almost any alignment approach. A more serious objection to the utility uncertainty agenda is that **truth-seeking is anti-competitive**, and we will by default select models more for their ability to take impactful actions than for their ability to have true beliefs about the world. In fact, the core argument of *Human Compatible* is that we should work on *differentially* improving our models' ability to reason about uncertainty relative to their ability to optimize over actions. It may be that this is a good strategy in theory but too hard in practice (it imposes too much of an alignment tax), but that argument should be made explicitly, and it has little to do with updated deference.

Conclusion
----------

So what does this tell us about whether utility uncertainty is a promising approach to AI alignment? Not much. I do think the "problem of updated deference" is better understood as a combination of prior mis-specification and competitiveness penalties from maintaining well-calibrated true beliefs. But I basically agree with [Rohin](https://www.alignmentforum.org/posts/Pkr97mB9Y4rkx5DdZ/utility-uncertainty-vs-expected-information-gain?commentId=FGSgRJdrewhEGXRW9) that utility uncertainty smuggles all the complexity of alignment into

1. creating a "reasonable" prior over reward functions
2. creating a "reasonable" human model
3. finding a computationally tractable way to do Bayesian inference with (1) and (2)

and it's not obvious whether this is actual conceptual progress, or a [wrong-way reduction](https://metarationality.com/wrong-way-reduction#:~:text=A%20wrong%2Dway%20reduction%20is,original%20one%E2%80%94or%20even%20impossible!) of a hard problem into an impossible problem.
363b953d-ec00-4b8c-a2ec-dceed367999b
trentmkelly/LessWrong-43k
LessWrong
The Calculus of Nash Equilibria

Now that we know a bit about derivatives, it's time to use them to find dominant strategies and Nash equilibria. It helps if the reader is familiar with Nash equilibria already.

Prisoner's dilemma

The payoff matrix of the Prisoner's dilemma can be as follows (entries are (V1, V2), consistent with the payoff functions given below):

| | Prisoner 2: Cooperate | Prisoner 2: Defect |
|---|---|---|
| Prisoner 1: Cooperate | (20, 20) | (0, 30) |
| Prisoner 1: Defect | (30, 0) | (10, 10) |

We can see that the payoff for Prisoner 1 depends on her own action (Cooperate/Defect) but also on the action of Prisoner 2. Therefore, the payoff function for Prisoner 1 is a multivariable function: V1(a1,a2), where an is the action of Prisoner n (and n∈{1,2}). Let's say an=0 when the action of Prisoner n is Cooperate, and an=1 for Defect. So an∈{0,1}. Then V1(a1,a2)=20−20a2+10a1, and crucially, V1′a1(a1,a2)=10 (the partial derivative of V1 with respect to a1). So for Defect (a1=1), Prisoner 1's payoff will be 10 higher than for Cooperate (a1=0), as can be confirmed in the table. Note that a2 doesn't show up in V1′a1(a1,a2): Defect gives $10 more for Prisoner 1 regardless of what Prisoner 2 does, which makes Defect a dominant strategy. Don't get me wrong: Prisoner 1's payoff certainly does depend on what Prisoner 2 does. The point is that no matter what Prisoner 2 does, Prisoner 1's payoff will be $10 higher when she (Prisoner 1) defects - and that's what's reflected in V1′a1(a1,a2)=10. Since the payoff matrix is symmetrical, V2(a1,a2)=20−20a1+10a2 and V2′a2(a1,a2)=10. Prisoner 2 therefore also has a dominant strategy: Defect. The Prisoner's dilemma, then, has a Nash equilibrium: when both prisoners defect. With the partial derivatives, we demonstrated that when both prisoners defect, no one prisoner can do better by changing her action to Cooperate. If e.g. Prisoner 1 were to do this, then a1 would go from 1 to 0, and since V1′a1(a1,a2)>0, that would lower V1 (regardless of a2). By symmetry, the same is true for Prisoner 2.

Nonlinear payoff functions

In the Prisoner's dilemma, the payoffs of both players (prisoners) can be modelled by linear payoff functions. What if the payoffs are nonlinear? Let's say V1(a1,a2)=−a1^2 and V2(a1,a2)=−a2^2+a2. Then V1′a1
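A quick numeric check of the dominant-strategy analysis above (a minimal sketch; the payoff functions are the ones given in the post, with 0 = Cooperate and 1 = Defect):

```python
def V1(a1, a2):
    return 20 - 20 * a2 + 10 * a1

def V2(a1, a2):
    return 20 - 20 * a1 + 10 * a2

# Defect raises Prisoner 1's payoff by exactly 10, whatever Prisoner 2 does,
# mirroring the constant partial derivative dV1/da1 = 10:
for a2 in (0, 1):
    assert V1(1, a2) - V1(0, a2) == 10

# (Defect, Defect) is a Nash equilibrium: no unilateral deviation helps.
assert V1(1, 1) >= V1(0, 1) and V2(1, 1) >= V2(1, 0)
print("dominant strategies confirmed")
```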
bf75bc0f-e548-4b9f-a2c7-a0e4f52faa18
trentmkelly/LessWrong-43k
LessWrong
From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments
bb468bce-4549-41f5-baca-c84c03327b52
trentmkelly/LessWrong-43k
LessWrong
New LW Meetup: Lyon

This summary was posted to LW Main on June 28th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

* [Lyon, France] LW Meetup in Lyon: 03 July 2013 06:00PM

Other irregularly scheduled Less Wrong meetups are taking place in:

* Brussels meetup with HEALES: 13 July 2013 01:00PM
* Frankfurt meetup: 30 June 2013 04:30PM
* Israel LW meetup: 04 July 2013 07:00PM
* [Moscow] The Goals We Set: 07 July 2013 04:00PM
* [Munich] LW Munich Meetup in July: 06 July 2013 03:00PM
* San Francisco: Effective Altruism: 28 June 2013 07:54PM
* [Vienna] LW Vienna Meetup #4: 13 July 2013 03:00PM

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

* Austin, TX: 29 June 2013 01:30PM
* London Practical - Sunday 7th July: 07 July 2013 02:00PM
* Melbourne LW Outing: Astronomy evening in Eltham, Saturday 29th June, 5:30pm: 29 June 2013 05:30PM
* Melbourne LW Outing: Indoor Rock Climbing, Sunday June 30th: 30 June 2013 02:00PM

Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Salt Lake City, Seattle, Toronto, Vienna, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!

In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of
e1d1a89d-5f88-4ca6-860b-da45cd3b5723
trentmkelly/LessWrong-43k
LessWrong
Containing the AI... Inside a Simulated Reality So I just finished the paper by Yampolskiy called "Uncontrollability of AI" and it makes for a compelling read. In particular, I was happy to finally see something that explicitly mentions the ludicrous folly of believing it possible to make an AI conform to "human values" - as many posts on this blog make abundantly clear, to be human is to be irrational... asking an AI to conform with our ways of "reasoning" is... well - incoherent, to put it mildly. But - that is not what this post is about :) I wish to propose a containment method that for some reason has not been especially elaborated on. Some might say it's another version of AI-in-a-Box, but I disagree. Allow me to explain... What if the AGI we create is "brought online" inside a simulated reality... A place that, as far as it knows, is the entirety of the world? Let us call this place AISpace.  Now some of you probably are already pre-heating your keyboards to respond with the often repeated (and valid) arguments that "prove" how this won't work, but let me add a little twist first... as some of you may agree, we have no definitive proof that our world itself is not a simulated reality. Thus, if you feel it inevitable that an AGI must be able to get out of AISpace, releasing an AGI into this world would have to at least leave open the possibility that an AGI on Real Earth (aka Reality as we think we know it) could also "conclude" this actual world is a sim, or decide to find out if it is (and, yes, convert the entire universe into a computer to come up with an answer ;) If we are unable to definitively settle whether Real Earth is or is not a simulation, why should an AI be able to do so? Now - of course the above requires a few conditions, some of which may indeed be hard to meet, such as human operators not exposing the fact that AISpace is not in fact all of Reality... and (malevolent or indifferent) actors could always choose to release their AI into the Real World anyhow. What I'd like us to do her
9152f7a6-c7d6-4a88-b63f-7f64bc0913a5
StampyAI/alignment-research-dataset/special_docs
Other
active reward learning from multiple teachers. Active Reward Learning from Multiple Teachers

Peter Barnett, Rachel Freedman, Justin Svegliato and Stuart Russell
Center for Human-Compatible AI, University of California, Berkeley, CA 94720, USA
SafeAI 2023, The AAAI Workshop on Artificial Intelligence Safety, Feb 13–14, 2023, Washington, D.C.

Abstract

Reward learning algorithms utilize human feedback to infer a reward function, which is then used to train an AI system. This human feedback is often a preference comparison, in which the human teacher compares several samples of AI behavior and chooses which they believe best accomplishes the objective. While reward learning typically assumes that all feedback comes from a single teacher, in practice these systems often query multiple teachers to gather sufficient training data. In this paper, we investigate this disparity, and find that algorithmic evaluation of these different sources of feedback facilitates more accurate and efficient reward learning. We formally analyze the value of information (VOI) when reward learning from teachers with varying levels of rationality, and define and evaluate an algorithm that utilizes this VOI to actively select teachers to query for feedback. Surprisingly, we find that it is often more informative to query comparatively irrational teachers. By formalizing this problem and deriving an analytical solution, we hope to facilitate improvement in reward learning approaches to aligning AI behavior with human values.

Keywords: Reward Learning, Active Learning, Preference Learning, Value of Information

1. Introduction

Standard AI and machine learning algorithms require the designer to specify a cost or reward function. This objective incentivizes desired behavior and penalizes mistakes, teaching the system how to perform the task. While such objectives are easy to manually specify for problems with clear win conditions, such as games [1, 2, 3], and tasks with clear goals, such as image classification [4, 5], they can be challenging to formalize for more nuanced tasks [6]. For example, Lee et al. [7] find that humans struggle to define an objective that incentivizes bipedal locomotion, despite being experts in both machine learning and walking. By incentivizing incorrect behavior, misspecified objectives can lead to useless or even dangerous outcomes [8]. Ensuring that AI systems optimize objectives that align with our own is a crucial part of building safe and beneficial AI.

Reward learning techniques enable AI systems to learn their objectives by observing and interacting with humans instead of requiring their designers to specify these objectives manually [9]. Humans can train reward learning systems using a variety of feedback modalities, including demonstrations [10, 11, 12], pairwise comparisons [7, 13, 14], natural language [15], numeric values [16], corrections [17], and proxy rewards [18, 19]. Reward learning from pairwise comparisons in particular has proven remarkably effective across a variety of tasks, including complex physical maneuvers for continuous control systems [7, 14] and text summarization for language models [20, 21].
In the future, it may even be possible to use reward learning to train AI systems to assist humans in researching safe AI [8, 22]. However, to infer reward functions from human feedback, reward learning systems must model human decision-making, and incorrect human decision-making models often lead to poor inference [23, 24, 25]. Moreover, reward learning systems typically assume that all feedback comes from a single distribution or teacher, despite querying multiple teachers to generate sufficient feedback. However, humans often vary in their expertise, focus, and intelligence, affecting the noisiness of their feedback. The practice of conflating all feedback implicitly disregards the differences between different teachers, increasing the likelihood of human model misspecification and exacerbating the limitations of reward learning [26].

In this work, we extend reward learning to take advantage of differences between teachers. We develop a Bayesian reward learning algorithm that actively selects which teacher to query based on the noisiness of their feedback and the learner's current belief. We find that querying a less rational teacher can often be more informative than querying a more rational teacher, since teacher mistakes inform the agent of the relative values of alternatives. For example, imagine that two teachers are comparing two alternatives, A and B. A is worth more than B, but only slightly. If the first teacher is perfectly rational, they will always select A over B. The learner can infer from this that A is preferable to B, but has no way to learn how significant the distinction is. However, assume that the second teacher is somewhat less rational, and occasionally mixes up alternatives of similar value. Then they will typically choose A, but sometimes choose B, and this allows the learner to infer that the gap between A and B is small. Section 3 formalizes this rationality model and inference procedure.

The rest of the paper is as follows. In Section 2, we discuss prior work on reward learning, active learning, and human modeling. In Section 3, we describe the mechanics of reward learning, including the model of human rationality and the metrics that will be used to measure the value of information (VOI) of teacher feedback. In Section 4, we propose a teacher selection algorithm that selects which teacher to query for feedback at each time step based on the modeled rationality of each teacher and the learner's belief distribution over the reward function. In Sections 5 and 6, we present theoretical and empirical results, showing that the learner's belief will eventually converge to the true reward function under the teacher selection algorithm, that querying less rational teachers can often be more informative, and that our teacher selection method outperforms simple heuristics like always querying the most rational teacher. By formalizing the problem of learning from multiple teachers and deriving an analytical solution, we hope to facilitate improvement in reward learning approaches to value alignment.

2. Related Work

Reward Learning

Reward learning techniques allow AI systems to learn reward functions by observing or interacting with humans. For example, inverse reinforcement learning agents observe human behavior or policies, and then infer an underlying reward function that the behavior optimizes [10, 11, 12]. Recent advances in reward learning have focused on learning from preference comparisons.
Here, human teachers observe paired samples of system behavior, then choose which sample they prefer out of each pair. The system learns a reward model that maximizes the likelihood of these preferences, then uses that model to generate a reward signal to guide its behavior. This technique has been successfully applied to many domains, from continuous control [7, 14] to language generation tasks [20, 21]. Reward learning can also use a variety of other feedback modalities, including preference comparisons [7, 13, 14], natural language [15], numeric values [16], corrections [17], and proxy rewards [18, 19], but we focus on preference comparisons in this paper due to its recent success.

Active Reward Learning

Human feedback is expensive and time-consuming to generate, so reward learning algorithms must learn efficiently from limited data. They do this in part by actively selecting the queries that are sent to human teachers in order to maximize the expected VOI of human feedback. Sadigh et al. [13] assume that the system is a Bayesian learner, actively synthesizing queries that maximize the expected volume removed from the learner's posterior. Bıyık and Sadigh [27] develop efficient approximations to this method and show how to integrate active query selection and reward learning in practice. Lee et al. [7] take a different approach, empirically evaluating various heuristic strategies for query selection and finding that uncertainty-based sampling methods tend to perform the best. However, all of this previous work focuses on choosing which queries to send to the teachers. In this paper, we instead consider which teachers to send these queries to.

[Figure 1: Our active reward learning approach.]

Human Modeling

To infer reward functions, AI systems must model the behavior of humans. Early work on reward learning assumed that human behavior was perfectly rational and that human teachers always chose the alternative that maximized their reward [10]. Later work models human behavior as pedagogic [24], systematically biased [28], and noisily or Boltzmann-rational [9, 12]. We will follow recent work on learning from human preferences [7, 9, 12, 14] and model human teachers as Boltzmann-rational, making choices according to a well-known probability model specified later in the paper.

3. Active Reward Learning

In this section, we formalize the problem of selecting the most informative teacher to query in order to gradually learn the correct reward model. In particular, we are interested in greedily selecting the teacher to query at each time step such that the reward model of the agent efficiently converges to the correct reward model.

At a high level, the teacher selection problem begins with a set of items or trajectories to compare, along with a set of human teachers to evaluate those comparisons. The human teachers each have a different level of rationality that is known a priori, meaning that the probability of a given human teacher making a mistake by preferring a less valuable item over a more valuable item is known in advance. During each time step of our approach, depicted in Figure 1, two items are sampled from the set of items (Step 1) and then a human teacher is selected to be queried based on these items and the current belief about the reward model (Step 2). The human teacher is asked which of the two items they prefer (Step 3), and their preference is used to update the reward model (Step 4).
This process of selecting a query and a teacher is repeated until the reward model converges to the correct reward model.

Query selection is the problem of choosing which items to present to the teacher [7]. Some approaches to query selection choose the pair of items for which the preference predictors are most uncertain [7, 14]. Other approaches select the pair of items that ensures that the space of queries is well covered. Finally, there are more active methods that synthesize queries in order to learn more efficiently [13, 29]. Since our focus is on teacher selection rather than query selection, for the purposes of our analysis we will assume that queries are sampled uniformly at random. However, existing methods for query selection can easily be combined with our teacher selection algorithm to further improve reward learning.

To formalize the problem of teacher selection, this section proceeds as follows. We (1) provide a representation of items and rewards, (2) apply a well-known model of human rationality to our problem, (3) offer a method for updating belief distributions that uses preference comparisons from a human teacher, and (4) propose two metrics that measure the correctness of a belief distribution.

Representing Items and Rewards
Intuitively, each item can be represented as a set of features. For example, a book could be described by the number of pages and the number of positive reviews, or a maneuver made by a self-driving car could be described by its position and distance from other vehicles at each time step. Hence, each item i can formally be represented by a feature vector φ_i ∈ R^d, where d is the number of features that describe the i-th item. Given this representation of an item, the reward R(i) for an item i can be expressed as a dot product between the feature vector φ_i and the weight vector w ∈ R^d for the reward model that is being learned:

R(i) = w^⊤ φ_i.  (1)

If the items cannot be expressed by a feature vector, this approach can still be used by treating the feature vector φ_i as a one-hot vector: given the i-th item, the i-th entry of the feature vector φ_i would be 1 and every other entry would be 0, while the i-th entry of the weight vector w would be the reward R(i) for the i-th item.

During reward learning, the human teacher is presented with two items, and the probability of the human choosing one item over another depends on the difference in reward between the two items at hand. We therefore express the difference in reward between two items i and j as

R(i) − R(j) = w^⊤(φ_i − φ_j) = w^⊤ ϕ_ij,  (2)

where ϕ_ij = φ_i − φ_j is the difference in the feature vectors of the two items.

Modeling Human Rationality
Human teachers can be represented as Boltzmann-rational agents, following a large body of existing work on reward learning [7, 9, 12, 14, 30, 31, 32, 33, 34]. Moreover, we assume that each teacher has a different known rationality parameter β rather than assuming β = 1 for all teachers. Boltzmann-rational teachers are more likely to choose the higher-reward item if they are "more rational" (i.e., have a higher β), or if the difference in reward between the two items is greater. The probability that the teacher chooses an item i over an item j is given by

P(i ≻ j; β) = exp(β R(i)) / (exp(β R(i)) + exp(β R(j))).  (3)

We thus model the human choice probabilistically:

P(I | w; ϕ_ij, β) = 1 / (1 + exp(−I β w^⊤ ϕ_ij)),  (4)

where I = +1 if the human prefers item i over item j and I = −1 if the human prefers item j over item i. This reflects the difference in value of the two items but not their absolute value. Equation 4 is a logistic model of the probability of the human preference I, where β determines the slope. As the difference in reward between the two items increases, the probability that the teacher chooses the higher-reward item approaches 1.
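As an illustration, the linear reward of Equation 1 and the preference likelihood of Equation 4 translate directly into code. This is our own sketch assuming NumPy; the function names are illustrative, not the paper's:

```python
# A sketch of Equations (1)-(4): linear rewards over item features and the
# Boltzmann-rational preference likelihood.
import numpy as np

def reward(w, phi):
    """Equation (1): R(i) = w^T phi_i."""
    return w @ phi

def preference_likelihood(I, w, phi_ij, beta):
    """Equation (4): P(I | w; phi_ij, beta) = 1 / (1 + exp(-I * beta * w^T phi_ij)).

    I is +1 if the teacher prefers item i and -1 if they prefer item j;
    phi_ij = phi_i - phi_j is the feature-difference vector.
    """
    return 1.0 / (1.0 + np.exp(-I * beta * (w @ phi_ij)))
```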
Updating Belief Distributions
The goal of reward learning is to learn the weight vector w of the reward model. Given the preference of a teacher I, the difference in feature vectors ϕ_ij, and the teacher's rationality parameter β, the learner updates its belief over the weights of the reward model. That is, the belief over the weights of the reward model is updated such that the reward model now predicts that the item selected by the teacher is more valuable than it was prior to the belief update. Formally, we begin with the current belief distribution P(w), which we treat as the prior distribution, and update it according to Bayes' theorem in the following way:

P(w | I; ϕ_ij, β) = P(I | w; ϕ_ij, β) P(w) / ∫ P(I | w′; ϕ_ij, β) P(w′) dw′,  (5)

where P(I | w; ϕ_ij, β) is given by Equation 4.

Measuring Belief Distribution Error
After querying a teacher and updating the belief over the weights of the reward model w, the belief distribution can be evaluated on a metric that measures the "correctness" of this belief distribution, i.e., its distance from the true belief distribution. Here, we consider two such metrics: the mean squared error (MSE) and the log loss (LL). The MSE measure represents how "far away" the belief distribution is from the true value, while the LL measure represents the height of the belief distribution at the true value. In both cases, a lower score indicates a more accurate distribution. Using Q(w) as the belief distribution over the weight vector w and w_true as the true weight vector, the MSE and LL measures are given as follows:

MSE(Q(w), w_true) = ∫ Q(w) ||w − w_true||² dw  (6)

LL(Q(w), w_true) = −log(Q(w_true))  (7)

Note that in the next section we will describe a greedy approach that selects the teacher that, in expectation, leads to our belief distribution scoring the best on one of these metrics after a single update.

Table 1: The general form of an expected metric M along with the expected metrics for mean squared error (MSE) and log loss (LL), using f_I(w) = P_w / (1 + exp(−I β w^⊤ ϕ_ij)).

General form: E_{w∼P_w, I∼P_{I|w}}[M(P_{w|I}, w; ϕ_ij, β)] = ∫ P_w Σ_I P_{I|w} M(P_{w|I}, w) dw

Expected MSE: E_{w∼P_w, I∼P_{I|w}}[MSE(P_{w|I}, w; ϕ_ij, β)] = Σ_I (2 / ∫ f_I(w) dw) × [∫ f_I(w) dw · ∫ f_I(w) ||w||² dw − ||∫ f_I(w) w dw||²]

Expected LL: E_{w∼P_w, I∼P_{I|w}}[LL(P_{w|I}, w; ϕ_ij, β)] = −Σ_I ∫ f_I(w) log( f_I(w) / ∫ f_I(w′) dw′ ) dw

Work on active learning from human preferences uses volume removal (i.e., removing as much of the integral of the unnormalized distribution as possible) as a metric [13, 27, 33]. However, this may not be an appropriate metric for teacher selection. This is because a larger Boltzmann rationality parameter β results in a larger volume of the belief distribution being removed, but may not necessarily lead to a more accurate belief distribution.
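For concreteness, here is a sketch of the Bayes update of Equation 5 and the two metrics of Equations 6 and 7 on a discretized weight grid. This is our illustrative implementation, not the paper's code: on a grid the integral in Equation 5 becomes a normalizing sum, and for the log loss we read off the probability mass of the grid cell nearest to w_true.

```python
# A sketch of the belief update (Equation 5) and the error metrics
# (Equations 6-7) on a discretized weight grid. P_w is a 1-D array of
# probabilities over candidate weight vectors W (an array of shape (n, d)).
import numpy as np

def update_belief(P_w, W, I, phi_ij, beta):
    """Bayes update: posterior proportional to P(I | w) * prior, normalized."""
    lik = 1.0 / (1.0 + np.exp(-I * beta * (W @ phi_ij)))
    post = lik * P_w
    return post / post.sum()

def mse(P_w, W, w_true):
    """Equation (6): expected squared distance of the belief from w_true."""
    return float(P_w @ np.sum((W - w_true) ** 2, axis=1))

def log_loss(P_w, W, w_true):
    """Equation (7): negative log belief mass at the grid cell nearest w_true."""
    idx = np.argmin(np.sum((W - w_true) ** 2, axis=1))
    return -np.log(P_w[idx])
```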
4. Teacher Selection

We propose a method for selecting and querying the teacher that produces the best immediate improvement in the expectation of a given metric, which approximates the expected VOI of the teacher feedback. The metrics evaluate how similar the posterior belief is to the ground-truth reward, so lower scores indicate improvements in the learned reward model. The algorithm considers uncertainty over two variables: the ground-truth parameterization of the reward model and the item from the query that the teacher prefers. In particular, the expectation of the metric must be taken over the current belief distribution P(w) and the probability P(I | w; ϕ_ij, β) of the teacher preferring each item. Formally, we express the expectation of a given metric M in Table 1. Note that we use the notation P_w = P(w), P_{I|w} = P(I | w; ϕ_ij, β), and P_{w|I} = P(w | I, ϕ_ij, β) throughout this section.

Importantly, the expected value of a given metric only depends on the known variables ϕ_ij and β along with the current belief distribution P_w, given a straightforward substitution of Equations 4 and 5. This enables our method to calculate the expected value of the metric for a given teacher with rationality parameter β. This will be used to find the teacher to query at each time step: the teacher with the lowest metric in expectation should be selected, as that would result in a weight vector that is closest to the true weight vector in expectation. Finally, given the general form of an expected metric, Table 1 defines the expectations of the MSE and LL metrics using the function f_I(w) = P_w / (1 + exp(−I β w^⊤ ϕ_ij)).

Selecting a Teacher
To select the teacher to query, we first calculate the expected metric for each teacher β given the current belief distribution P(w) and then select the teacher that would result in the lowest expected metric score. Formally, the rationality parameter β* that leads to the largest reduction in the expectation of the metric is defined as follows:

β* = argmin_{β ∈ B} E_{w∼P_w, I∼P_{I|w}}[M(P_{w|I}, w; ϕ_ij, β)],  (8)

where B is the vector of the β values of the teachers.

Learning a Reward Model
To learn the reward model, the learner begins with an initial belief distribution P_w over the reward function parameterization and then updates it according to Algorithm 1. First, the algorithm generates queries of paired items and calculates β*, which is the rationality parameter that leads to the largest improvement in the expectation over the correctness metric. The algorithm queries the teacher with this rationality parameter, and the teacher responds with a preference indicating which of the two items in the query they prefer. This preference is used to update the belief distribution P_w. The algorithm iterates until convergence, which is when the entropy of the distribution P_w becomes lower than a specified threshold ε.

Algorithm 1: LearnRewardModel(·)
Input: An initial belief distribution P(w), a list of the teachers' Boltzmann rationality parameters B, an expected metric function E[M], and an entropy convergence threshold ε
Output: A posterior belief distribution P(w)
  converged ← False
  while not converged do
    φ_i, φ_j ← GenerateQuery()
    ϕ_ij ← φ_i − φ_j
    β* ← argmin_{β ∈ B} E[M(P(w), w; ϕ_ij, β)]
    I ← Teacher(β*).Query(φ_i, φ_j)
    P(w) ← Normalize(P(w) · P(I | w, ϕ_ij, β*))
    entropy ← −∫ P(w) log P(w) dw
    converged ← entropy < ε
  return P(w)
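The selection rule of Equation 8 can be sketched as follows. This is again our illustrative code, reusing update_belief and the metric functions from the sketches above; on a grid, the expectation becomes a double sum over answers I and candidate true weights w:

```python
# A sketch of Equation (8): choose the teacher whose answer, in expectation
# under the current belief, most reduces the given metric. Reuses
# update_belief and the metric functions from the earlier sketches.
import numpy as np

def expected_metric(P_w, W, phi_ij, beta, metric):
    """E_{w ~ P_w, I ~ P(I|w)}[metric(posterior after I, w)] for one teacher."""
    total = 0.0
    for I in (+1, -1):
        lik = 1.0 / (1.0 + np.exp(-I * beta * (W @ phi_ij)))  # P(I | w) per grid point
        post = update_belief(P_w, W, I, phi_ij, beta)          # P(w | I)
        # Weight the metric by P_w(w) * P(I | w) for each candidate true w.
        total += sum(p * l * metric(post, W, w) for p, l, w in zip(P_w, lik, W))
    return total

def select_teacher(P_w, W, phi_ij, betas, metric):
    """Equation (8): the rationality parameter minimizing the expected metric."""
    return min(betas, key=lambda b: expected_metric(P_w, W, phi_ij, b, metric))
```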
5. Theoretical Analysis

In this section, we first prove that the belief distribution will converge to the true distribution and then show that, under certain conditions, querying a less rational teacher can result in more informative feedback.

Convergence
Algorithm 1 queries multiple teachers with different β values until the reward estimate converges. Here, we show that this process will make the belief distribution over w converge to the true value.

Theorem 1. In the limit of N → ∞ random queries to Boltzmann-rational teachers with positive, finite β values, the posterior distribution over w converges to the true value.

Proof. The likelihood of a sequence of human choices I ∈ {±1}^N from humans with rationality parameters β is P(I | w; β) = Π_{i=1}^N P(I_i | w; β_i). The posterior distribution over w after a sequence of queries is

P(w | I; β) ∝ Π_{i=1}^N P(I_i | w; β_i) P(w).

We will show that P(w | I; β) → 0 as N → ∞ for all w ≠ w_true. The Bayes factor between w and w_true is

BF = P(w | I; β) / P(w_true | I; β) = [Π_{i=1}^N P(I_i | w; β_i) P(w)] / [Π_{i=1}^N P(I_i | w_true; β_i) P(w_true)],

where P(w_true | I; β) is the posterior distribution at w_true. We can show that BF → 0 as N → ∞ except when w = w_true. This implies P(w | I; β) → 0 except when w = w_true. We require P(w_true) ≠ 0, as BF is undefined otherwise. Trivially, BF = 1 when w = w_true. We now consider w ≠ w_true.

We can define the negative logarithm of BF, which approaches ∞ as BF → 0:

−log(BF) = −Σ_{i=1}^N log( P(I_i | w; β_i) / P(I_i | w_true; β_i) ) − log( P(w) / P(w_true) ).

The first term is the sum of many terms. If this term approaches ∞ as N → ∞, then BF → 0. We now examine each term in the sum and show that in expectation each is positive. All of these terms are independent, as they depend only on the likelihood and not on the current distribution. Hence, they will not decay with additional steps, and so the sum will diverge if the individual terms are positive in expectation. The expected value of each term in the sum is

E[ −log( P(I_i | w; β_i) / P(I_i | w_true; β_i) ) ] = −Σ_{I_i ∈ {+1,−1}} P(I_i | w_true; β_i) log( P(I_i | w; β_i) / P(I_i | w_true; β_i) ).

This is the KL divergence between P(I_i | w_true; β_i) and P(I_i | w; β_i). It is non-negative and only equal to zero when P(I_i | w; β_i) = P(I_i | w_true; β_i). When β = 0, each of these terms equals 0. As β → ∞, P(I_i | w; β_i) → H(I w^⊤ ϕ), where H(·) is the Heaviside step function. In this case, it holds that P(I_i | w; β_i) = P(I_i | w_true; β_i) whenever the values w^⊤ ϕ and w_true^⊤ ϕ have the same sign. Therefore, for positive, finite β, each of the terms in the sum is positive, so the sum diverges, and so P(w | I; β) → 0 for all w ≠ w_true.

Bigger β isn't always more informative
Querying a more rational teacher (with a larger β value) does not always lead to faster convergence to the true value, as measured by lower MSE or LL, because the magnitude of w^⊤ ϕ_ij can be learned from the teacher making mistakes. We observe this empirically in Figure 2, where we demonstrate that if our current belief distribution P(w) is a normal distribution characterized by μ and σ, a lower β value is more informative for certain values of μ and σ. Specifically, when the distribution is symmetric (μ = 0), a larger value of β is better, and as the distribution gets broader (larger σ), larger β is also better. If the distribution is very wide, then a large β allows us to quickly remove a lot of probability mass, while if the distribution is narrow (and asymmetric), then we learn about the value of w^⊤ ϕ_ij from the humans making mistakes, which requires the human to be less than perfectly rational. For example, if w^⊤ ϕ_ij > 0, then a perfectly rational human would always choose item i over item j, and we would not learn about the actual value of w^⊤ ϕ_ij.

[Figure 2: For some prior beliefs over w, querying a teacher with a lower β parameter is more informative. The plots show the most informative β value (according to the mean squared error and log loss metrics, respectively) for a range of beliefs. Each belief is a Gaussian, parameterized by μ (horizontal axis) and σ (vertical axis). The purple regions of the plots indicate beliefs where it is most informative to query a teacher with a β of approximately 1.]
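As a numeric sanity check of the convergence argument above (our sketch, not from the paper), each per-query term in the expected negative log Bayes factor is the KL divergence between the true and hypothesized choice distributions, which is positive for any finite β > 0 whenever the two Bernoulli likelihoods differ:

```python
# A numeric check that the expected per-query log Bayes-factor term, i.e.
# KL(P(I | w_true) || P(I | w)), is positive for finite beta > 0 whenever
# the two hypotheses induce different choice probabilities.
import numpy as np

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def p_choose(gap, beta):
    """Boltzmann probability of picking the first item given reward gap."""
    return 1.0 / (1.0 + np.exp(-beta * gap))

gap_true, gap_wrong = 0.5, 1.5  # w_true^T phi vs. a wrong hypothesis w^T phi
for beta in [0.1, 1.0, 10.0]:
    kl = bernoulli_kl(p_choose(gap_true, beta), p_choose(gap_wrong, beta))
    print(f"beta={beta:5.1f}  per-query KL={kl:.4f}")
```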
6. Restaurant Recommendation

We now discuss how our method for reward learning using feedback from multiple teachers can be applied to a simplified restaurant recommendation domain. In this domain, the goal is to learn a reward function that can be used to recommend restaurants to a user. This reward model must be learned from feedback from multiple teachers, in this case by asking which of two restaurants a human prefers. It is important to highlight that our approach is compatible with a variety of popular recommendation tasks, including entertainment [35, 36], news [37], and shopping [38] recommendations.

More formally, the problem of restaurant recommendation has a set of restaurants ρ = {ρ_1, ρ_2, …, ρ_n} that can be recommended to a user. Moreover, there is a set of users U = {U_1, U_2, …, U_m} who can be queried about their restaurant preferences. Each restaurant is expressed as a set of features F = {Cleanliness, Vegan, Spiciness}, where Cleanliness ∈ [1, 10] describes the cleanliness of the restaurant, Vegan ∈ {0, 1} describes whether the restaurant is vegan-friendly, and Spiciness ∈ [1, 10] describes the spiciness of the food. The preference rating for each restaurant is denoted by w^⊤ ρ_i, where w ∈ R³ is a weight vector that parameterizes the reward model. The aim is to learn the weights w using feedback from multiple users to provide useful restaurant recommendations.

We can represent the restaurant recommendation domain using our approach. The set of items φ_1, φ_2, …, φ_n is the set of restaurants ρ. The set of human users U is the set of human teachers. The users are modelled as Boltzmann-rational, and have known rationality parameters β_1, β_2, …, β_m. Beginning with an initial distribution P(w), we use Algorithm 1 to converge to the weight values for the reward function that represents the user preferences. First, we select a pair of restaurants for a user to compare (in this case randomly selected) and apply Equation 8, which describes which user should be queried in order to achieve the lowest metric score in expectation after a single update. Next, this user is selected and asked which of the two restaurants they prefer. Finally, using the selected user's preference, the reward model weights are updated according to Equation 5 to generate a new belief distribution. The process is repeated until the belief distribution converges.
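A hedged sketch of this domain setup follows; the feature ranges match the text, but the specific weight values and user β values below are made up for illustration:

```python
# A sketch of the restaurant domain: each restaurant is a feature vector
# (Cleanliness, Vegan, Spiciness) and its rating is w^T rho_i. The weights
# and user betas here are illustrative placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sample_restaurant():
    """Features: Cleanliness in [1,10], Vegan in {0,1}, Spiciness in [1,10]."""
    return np.array([rng.uniform(1, 10), rng.integers(0, 2), rng.uniform(1, 10)])

user_betas = [0.5, 1.0, 2.0]         # known rationality of each user (made up)
w_true = np.array([0.8, 2.0, -0.3])  # hypothetical true preference weights
rho_i, rho_j = sample_restaurant(), sample_restaurant()
print("preference rating gap:", w_true @ (rho_i - rho_j))
```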
7. Experiments

We now show that our method for selecting β outperforms several baseline methods, using the simple restaurant recommendation domain. In Figure 3, we compare against: (1) selecting the largest β value, to see whether the result that larger β is not always better holds in practice; (2) selecting β randomly, to ensure that the advantage over selecting the largest β is not just due to the randomness of the selection; and (3) always selecting β = 1, because this is often the rationality parameter assumed in other work.

In this experiment, the size of the weight vector is d = 3 and the domain of the weights is W = [−10, 10]³, which is discretized. The prior distribution over the weights is a uniform distribution over this domain, P(w) = U(W), and the true weight w_true ∈ W is sampled from this prior. There are 21 teachers, with β values uniformly spaced between 0 and 4. For 100 steps, two restaurant feature vectors φ = (Cleanliness, Vegan, Spiciness) are generated randomly, where Cleanliness, Spiciness ∼ U(1, 10) and Vegan is uniformly drawn from {0, 1}. While we generate our samples randomly in order to isolate the effect of teacher selection, any of the active query selection methods from previous work could be used here. The teacher is selected and then queried using one of the various methods, and the belief distribution is updated based on the preference of that teacher. The same φ vectors are used for each method, so that the only difference between the methods is the selection of β. This procedure is repeated 100 times, each time sampling a new true weight vector w_true.

Overall, we observe that the active teacher selection methods (MSE and LL) outperform the baseline methods.

[Figure 3: Active teacher selection improves reward inference. These plots show the expected mean squared error and expected log loss over the course of 100 iterations of reward inference using various teacher selection methods. The solid line is the mean, and the shading is the standard deviation. Selecting teacher β w.r.t. mean squared error most effectively minimizes mean squared error, while selecting β w.r.t. log loss most effectively minimizes log loss. In both cases, selecting teachers according to Equation 8 clearly outperforms the heuristic of always selecting the most rational teacher (largest β) and the baselines (random β and β = 1).]

Moreover, we examine how the most informative value of β changes with additional queries in Figure 4. As expected, the optimal β value decreases with additional queries, as the distribution gets less broad. At the beginning of training, our approach queries the teachers with large β values because this enables it to determine the sign of w^⊤ ϕ_ij, and then it queries the teachers with smaller β values to determine the magnitude of w^⊤ ϕ_ij as it gets more information.

[Figure 4: This plot shows the most informative values of β during training, averaged across 100 runs (given the expected mean squared error and expected log loss, respectively). The solid line is the mean and the shaded area is the standard deviation. β decreases over the course of training, as the learner's belief distribution over w becomes more confident.]
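Putting the pieces together, here is a condensed and unoptimized sketch of the experimental loop just described, reusing the helper functions from the earlier sketches. The weight grid is coarsened from the paper's discretization of [−10, 10]³ to keep the example fast, and β = 0 is excluded since Theorem 1 requires positive β:

```python
# A condensed sketch of the experiment: uniform prior over a discretized
# weight grid, random restaurant queries, and active teacher selection.
# Reuses select_teacher, update_belief, and mse from the earlier sketches.
import itertools
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-10, 10, 3)                         # coarse grid for speed
W = np.array(list(itertools.product(grid, repeat=3)))  # candidate weight vectors
P_w = np.full(len(W), 1.0 / len(W))                    # uniform prior P(w) = U(W)
w_true = W[rng.integers(len(W))]                       # true weights from the prior
betas = np.linspace(0.0, 4.0, 21)[1:]                  # teachers, beta = 0 dropped

for step in range(100):
    # Random query: two restaurants with (Cleanliness, Vegan, Spiciness) features.
    phi_i = np.array([rng.uniform(1, 10), rng.integers(0, 2), rng.uniform(1, 10)])
    phi_j = np.array([rng.uniform(1, 10), rng.integers(0, 2), rng.uniform(1, 10)])
    phi_ij = phi_i - phi_j
    beta = select_teacher(P_w, W, phi_ij, betas, mse)  # Equation (8)
    p = 1.0 / (1.0 + np.exp(-beta * (w_true @ phi_ij)))
    I = +1 if rng.random() < p else -1                 # simulated teacher's answer
    P_w = update_belief(P_w, W, I, phi_ij, beta)       # Equation (5)

print("final MSE:", mse(P_w, W, w_true))
```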
8. Limitations and Future Work

For the sake of conceptual clarity and mathematical formalism, we have used relatively simple human decision-making and reward models. Future work should extend these results by increasing model complexity.

For example, this analysis assumes that humans are Boltzmann-rational decision-makers with constant, known β values. While more nuanced than optimal models, Boltzmann-rational models fail to account for systematic biases in human judgement [28, 39, 40]. This work could be improved by using more complex, realistic models of human decision-making, for example by allowing each human's β parameter to vary across the state space to capture teacher specialization, or by measuring and explicitly modeling systematic cognitive biases. Moreover, this analysis assumes that the teacher β parameters are given, whereas in reality the agent may not have access to this information. Future work should also examine ways of modeling this part of human decision-making alongside learning the reward function.

Finally, future work could extend these results to nonlinear reward models, such as ensembles of neural networks. It could also explore convergence properties and optimal querying strategies for learning from teachers with different reward functions. For example, variations in individual taste might lead teachers to disagree on which restaurants are best. Future work should explore the ramifications of such inter-teacher variance on teacher selection and reward learning.

9. Conclusion

In this work, we motivated, specified, and evaluated an algorithm for selecting which teacher to query during active reward learning with multiple teachers. Our algorithm models the teachers as Boltzmann-rational with known β parameters. At each time step, it queries the teacher that will be most informative in expectation. Interestingly, we find that the most informative teacher is not always the most rational one. We prove and demonstrate that the reward learner's belief will eventually collapse to the true reward function under our algorithm. Our hope is that this method and analysis will improve reward learning in domains where feedback is gathered from multiple teachers with varying levels of rationality.

Acknowledgments

We thank the anonymous reviewers for their valuable comments. This work was supported in part by a gift from the Open Philanthropy Foundation.

References

[1] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Mastering the game of Go with deep neural networks and tree search, Nature 529 (2016) 484–489.
[2] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al., Mastering the game of Go without human knowledge, Nature 550 (2017) 354–359.
[3] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Dębiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, et al., Dota 2 with large scale deep reinforcement learning, arXiv preprint arXiv:1912.06680 (2019).
[4] A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, Communications of the ACM 60 (2017) 84–90.
[5] F. Wang, M. Jiang, C. Qian, S. Yang, C. Li, H. Zhang, X. Wang, X. Tang, Residual attention network for image classification, in: IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3156–3164.
[6] V. Krakovna, Specification gaming examples in AI, 2018.
[7] K. Lee, L. M. Smith, P. Abbeel, PEBBLE: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training, in: 38th International Conference on Machine Learning, PMLR, 2021, pp. 6152–6163.
[8] J. Leike, D. Krueger, T. Everitt, M. Martic, V. Maini, S. Legg, Scalable agent alignment via reward modeling: A research direction, arXiv preprint arXiv:1811.07871 (2018).
[9] H. J. Jeon, S. Milli, A. D. Dragan, Reward-rational (implicit) choice: A unifying formalism for reward learning, arXiv preprint arXiv:2002.04833 (2020).
[10] A. Y. Ng, S. J. Russell, Algorithms for inverse reinforcement learning, in: International Conference on Machine Learning, 2000, pp. 663–670.
[11] P. Abbeel, A. Y. Ng, Apprenticeship learning via inverse reinforcement learning, in: 21st International Conference on Machine Learning, 2004, p. 1.
[12] B. D. Ziebart, Modeling purposeful adaptive behavior with the principle of maximum causal entropy, Ph.D. thesis, Carnegie Mellon University, 2010.
[13] D. Sadigh, A. Dragan, S. Sastry, S. Seshia, Active preference-based learning of reward functions, in: Robotics: Science and Systems XIII, 2017, pp. 53–63.
[14] P. F. Christiano, J. Leike, T. B. Brown, M. Martic, S. Legg, D. Amodei, Deep reinforcement learning from human preferences, Neural Information Processing Systems (2017) 4300–4308.
[15] P. Goyal, S. Niekum, R. J. Mooney, Using natural language for reward shaping in reinforcement learning, arXiv preprint arXiv:1903.02020 (2019).
[16] D. Arumugam, J. K. Lee, S. Saskin, M. L. Littman, Deep reinforcement learning from policy-dependent human feedback, arXiv preprint arXiv:1902.04257 (2019).
[17] A. Bajcsy, D. P. Losey, M. K. O'Malley, A. D. Dragan, Learning robot objectives from physical human interaction, Machine Learning Research 78 (2017) 217–226.
[18] D. Hadfield-Menell, S. Milli, P. Abbeel, S. J. Russell, A. Dragan, Inverse reward design, in: Neural Information Processing Systems, 2017, pp. 6765–6774.
[19] S. Mindermann, R. Shah, A. Gleave, D. Hadfield-Menell, Active inverse reward design, arXiv preprint arXiv:1809.03060 (2018).
[20] N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, P. F. Christiano, Learning to summarize with human feedback, Neural Information Processing Systems 33 (2020) 3008–3021.
[21] D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, G. Irving, Fine-tuning language models from human preferences, arXiv preprint arXiv:1909.08593 (2019).
[22] J. Leike, J. Schulman, J. Wu, Our approach to alignment research, 2022. URL: https://openai.com/blog/our-approach-to-alignment-research/.
[23] J. Skalse, A. Abate, Misspecification in inverse reinforcement learning, arXiv preprint arXiv:2212.03201 (2022).
[24] S. Milli, A. D. Dragan, Literal or pedagogic human? Analyzing human model misspecification in objective learning, in: Uncertainty in Artificial Intelligence, 2020, pp. 925–934.
[25] R. Freedman, R. Shah, A. Dragan, Choice set misspecification in reward inference, arXiv preprint arXiv:2101.07691 (2021).
[26] O. Daniels-Koch, R. Freedman, The expertise problem: Learning from specialized feedback, arXiv preprint arXiv:2211.06519 (2022).
[27] E. Bıyık, D. Sadigh, Batch active preference-based learning of reward functions, arXiv preprint arXiv:1810.04303 (2018).
[28] O. Evans, A. Stuhlmüller, N. D. Goodman, Learning the preferences of ignorant, inconsistent agents, in: 30th AAAI Conference on Artificial Intelligence, 2016, pp. 323–329.
[29] E. Bıyık, M. Palan, N. C. Landolfi, D. P. Losey, D. Sadigh, Asking easy questions: A user-friendly approach to active reward learning, in: Conference on Robot Learning, 2020, pp. 1177–1190.
[30] R. A. Bradley, M. E. Terry, Rank analysis of incomplete block designs: I. The method of paired comparisons, Biometrika 39 (1952) 324–345.
[31] X. Liang, K. Shu, K. Lee, P. Abbeel, Reward uncertainty for exploration in preference-based reinforcement learning, arXiv preprint arXiv:2205.12401 (2022).
[32] D. Ramachandran, E. Amir, Bayesian inverse reinforcement learning, in: International Joint Conference on Artificial Intelligence, volume 7, 2007, pp. 2586–2591.
[33] M. Palan, G. Shevchuk, N. Charles Landolfi, D. Sadigh, Learning reward functions by integrating human demonstrations and preferences, in: Robotics: Science and Systems XV, 2019, pp. 23–33.
[34] R. Freedman, J. S. Borg, W. Sinnott-Armstrong, J. P. Dickerson, V. Conitzer, Adapting a kidney exchange algorithm to align with human values, Artificial Intelligence 283 (2020) 103261.
[35] C. A. Gomez-Uribe, N. Hunt, The Netflix recommender system: Algorithms, business value, and innovation, ACM Transactions on Management Information Systems (TMIS) 6 (2015) 1–19.
[36] M. Perano, G. L. Casali, Y. Liu, T. Abbate, Professional reviews as service: A mix method approach to assess the value of recommender systems in the entertainment industry, Technological Forecasting and Social Change 169 (2021) 120800.
[37] S. Raza, C. Ding, News recommender system: A review of recent progress, challenges, and opportunities, Artificial Intelligence Review (2021) 1–52.
[38] P. M. Alamdari, N. J. Navimipour, M. Hosseinzadeh, A. A. Safaei, A. Darwesh, A systematic study on the recommender systems in the E-commerce, IEEE Access 8 (2020) 115694–115716.
[39] R. Shah, N. Gundotra, P. Abbeel, A. Dragan, On the feasibility of learning, rather than assuming, human biases for reward inference, in: 36th International Conference on Machine Learning, PMLR, 2019, pp. 5670–5679.
[40] L. Chan, A. Critch, A. Dragan, Human irrationality: Both bad and good for reward inference, arXiv preprint arXiv:2111.06956 (2021).
fb0deb6e-fd13-40b8-966f-7b2c36837505
trentmkelly/LessWrong-43k
LessWrong
Meetup : Australia-wide Online Hangout - August Discussion article for the meetup : Australia-wide Online Hangout - August WHEN: 09 August 2015 07:30:00PM (+1000) WHERE: Australia See you at the online hangout. From wherever you are. Link to be posted about 10 minutes before hand because they expire otherwise. We use google hangouts so make sure you can get into one of those before the meetup or else there is a whole bunch of fluffing around installing things. bring any fickle puzzles or questions to the floor. or neat group-projects. Usual representation includes; Sydney, Melbourne, Canberra, Brisbane, NZ, This one guy from South America... https://www.facebook.com/events/484946161670032/ time 19:30 - 22:00. UTC+10 (Sunday evening) Discussion article for the meetup : Australia-wide Online Hangout - August
ce99d0e1-0036-4582-80cc-cfe542ea9c62
trentmkelly/LessWrong-43k
LessWrong
Reminder: Weekly LW meetings in NYC Hey everyone. This is just a reminder that there are weekly Less Wrong/Overcoming Bias meetups in New York City. Meetups are usually held on Tuesday at 7 PM, in lower Manhattan (south of Central Park). If you're interested, more information is available if you sign up for the overcomingbiasnyc Google Group. If you're in the area, hope to see you there!
6f4abac5-785a-45da-8dfe-d2cbc1ec4140
trentmkelly/LessWrong-43k
LessWrong
Chapter 90: Roles, Pt 1 A simple Innervate from the Headmaster had awakened Fred Weasley, followed by a preliminary healing Charm for a broken arm and cracked ribs. Harry's voice had distantly told the Headmaster about the Transfigured acid inside the troll's head (Dumbledore had looked down over the side of the terrace and made a gesture before returning) and then about the Weasley twins' minds having been tampered with, carrying on a separate conversation that Harry remembered but could not process. Harry still stood over Hermione's body, he hadn't moved from that spot, thinking as fast as he could through the sense of dissociation and fragmented time, was there anything he should be doing now, any opportunities that were passing irrevocably. Some way to reduce the amount of magical omnipotence that would be required later. A temporal beacon effect to mark this instant for later time travel, if he someday found a way to travel back further than six hours. There were theories of time travel under General Relativity (which had seemed much less plausible before Harry had run across Time-Turners) and those theories said you couldn't go back to before the time machine was built - a relativistic time machine maintained a continuous pathway through time, it didn't teleport anything. But Harry didn't see anything helpful he could do using spells in his lexicon, Dumbledore wasn't being very cooperative, and in any case this was several minutes after the critical location within Time "Harry," the Headmaster whispered, laying his hand on Harry's shoulder. He had vanished from where he was standing over the Weasley twins and come into existence beside Harry; George Weasley had discontinously teleported from where he was sitting to be kneeling next to his brother's side, and Fred was now lying straight with his eyes open and wincing as he breathed. "Harry, you must go from this place." "Hold on," said Harry's voice. "I'm trying to think if there's anything else I can do." The old wizard's voice s
f4f5d500-8753-4668-b658-d69c1cb6114a
StampyAI/alignment-research-dataset/blogs
Blogs
OpenAI and other news ![open-ai[1]](http://intelligence.org/wp-content/uploads/2015/12/open-ai1.png)We’re only 11 days into December, and this month is shaping up to be a momentous one. On December 3, the University of Cambridge partnered with the University of Oxford, Imperial College London, and UC Berkeley to launch the **[Leverhulme Centre for the Future of Intelligence](http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of)**. The Cambridge Centre for the Study of Existential Risk (CSER) helped secure initial funding for the new independent center, in the form of a $15M grant to be disbursed over ten years. CSER and Leverhulme CFI plan to collaborate closely, with the latter focusing on AI’s mid- and long-term social impact. Meanwhile, the **Strategic Artificial Intelligence Research Centre** (SAIRC) is hiring its first research fellows in machine learning, policy analysis, and strategy research: [details](http://www.fhi.ox.ac.uk/vacancies/). SAIRC will function as an extension of two existing institutions: CSER, and the Oxford-based Future of Humanity Institute. [As Luke Muehlhauser has noted](http://lukemuehlhauser.com/if-youre-an-ai-safety-lurker-now-would-be-a-good-time-to-de-lurk/), if you’re an AI safety “lurker,” now is an ideal time to de-lurk and get in touch. MIRI’s research program is also growing quickly, with mathematician Scott Garrabrant joining our core team tomorrow. Our [winter fundraiser](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) is in full swing, and multiple [matching](https://dansmithholla.wordpress.com/2015/12/08/december-charity-drive/) opportunities have sprung up to bring us within a stone’s throw of our first funding target. The biggest news, however, is the launch of **OpenAI**, a new $1 billion research nonprofit staffed with top-notch machine learning experts and co-chaired by Sam Altman and Elon Musk. The OpenAI team [describes their mission](https://openai.com/blog/introducing-openai/): > Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely. > > I’ve been in conversations with Sam Altman and Greg Brockman at OpenAI as their team has come together. They’ve expressed a keen interest in making sure that AI has a positive impact, and we’re looking forward to future collaborations between our teams. ~~I’m excited to see OpenAI joining the space, and I’m optimistic that their entrance will result in promising new AI alignment research in addition to AI capabilities research.~~ 2015 has truly been [an astounding year](https://intelligence.org/2015/07/16/an-astounding-year/) — and I’m eager to see what 2016 holds in store. --- **Nov. 2021 update**: The struck sentence in this post is potentially misleading as a description of my epistemic state at the time, in two respects: 1. My feelings about OpenAI at the time were, IIRC, some cautious optimism plus a bunch of pessimism. My sentence was written only from the optimism, in a way that was misleading about my overall state. 2. 
The sentence here is unintentionally ambiguous: I intended to communicate something like “OpenAI is mainly a capabilities org, but I’m hopeful that they’ll do a good amount of alignment research too”, but I accidentally left open the false interpretation “I’m hopeful that OpenAI will do a bunch of alignment research, and I’m hopeful that OpenAI will do a bunch of capabilities research too”. The post [OpenAI and other news](https://intelligence.org/2015/12/11/openai-and-other-news/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
db24c366-5f6e-4032-9861-b2f6f52192a1
trentmkelly/LessWrong-43k
LessWrong
Rationalists should beware rationalism Rationalism is most often characterized as an epistemological position. On this view, to be a rationalist requires at least one of the following: (1) a privileging of reason and intuition over sensation and experience, (2) regarding all or most ideas as innate rather than adventitious, (3) an emphasis on certain rather than merely probable knowledge as the goal of enquiry. -- The Stanford Encyclopedia of Philosophy on Continental Rationalism. By now, there are some things which most Less Wrong readers will agree on. One of them is that beliefs must be fueled by evidence gathered from the environment. A belief must correlate with reality, and an important part of that is whether or not it can be tested - if a belief produces no anticipation of experience, it is nearly worthless. We can never try to confirm a theory, only test it. But yet, we seem to have no problem coming up with theories that are either untestable or that we have no intention of testing, such as evolutionary psychological explanations for the underdog effect. I'm being a bit unfair here. Those posts were well thought out and reasonably argued, and Roko's post actually made testable predictions. Yvain even made a good try at solving the puzzle, and when he couldn't, he reasonably concluded that he was stumped and asked for help. That sounds like a proper use of humility to me. But the way that ev-psych explanations get rapidly manufactured and carelessly flung around on OB and LW has always been a bit of a pet peeve for me, as that's exactly how bad evpsych gets done. The best evolutionary psychology takes biological and evolutionary facts, applies those to humans and then makes testable predictions, which it goes on to verify. It doesn't take existing behaviors and then try to come up with some nice-sounding rationalization for them, blind to whether or not the rationalization can be tested. Not every behavior needs to have an evolutionary explanation - it could have evolved via genetic drift, o
e3f92173-61cd-4dcb-b03d-82524c1250fd
trentmkelly/LessWrong-43k
LessWrong
Video and Transcript of Presentation on Existential Risk from Power-Seeking AI In March 2022, I gave a presentation about existential risk from power-seeking AI, as part a lecture series hosted by Harvard Effective Altruism. The presentation summarized my report on the topic. With permission from the organizers, I'm posting the video here, along with the transcript (lightly edited for clarity/concision) and the slides.  Main Talk Thanks for having me, nice to be here, and thanks to everyone for coming. I'm Joe Carlsmith, I work at Open Philanthropy, and I'm going to be talking about the basic case, as I see it for, for getting worried about existential risk from artificial intelligence, where existential risk just refers to a risk of an event that could destroy the entire future and all of the potential for what the human species might do. Plan I'm going to discuss that basic case in two stages. First, I'm going to talk about what I see as the high-level backdrop picture that informs the more detailed arguments about this topic, and which structures and gives intuition for why one might get worried. And then I'm going to go into a more precise and detailed presentation of the argument as I see it -- and one that hopefully makes it easier to really pin down which claims are doing what work, where might I disagree and where could we make more progress in understanding this issue. And then we'll do some Q&A at the end. I understand people in the audience might have different levels of exposure and understanding of these issues already. I'm going to be trying to go at a fairly from-scratch level.  And I should say: basically everything I'm going to be saying here is also in a report I wrote last year, in one form or another. I think that's linked on the lecture series website, and it's also on my website, josephcarlsmith.com. So if there's stuff we don't get to, or stuff you want to learn more about, I'd encourage you to check out that report, which has a lot more detail. And in general, there's going to be a decent amount of material here
676e4584-db63-4fd4-92bb-36d48b88c776
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Proofs Section 1.1 (Initial results to LF-duality)

Fair upfront warning: This is not a particularly
readable proof section (though much better than Section 2 about belief functions). There's dense notation, logical leaps due to illusion of transparency since I've spent a month getting fluent with these concepts, and a relative lack of editing since it's long. If you really want to read this, I'd suggest PM-ing me to get a link to MIRIxDiscord, where I'd be able to guide you through it and answer questions.   **Proposition 1:** *If*f∈C(X,[0,1])*then*f+:(m,b)↦m(f)+b*is a positive functional on*Msa(X)*.* Proof Sketch: We just check three conditions. Linearity, being nonnegative on Msa(X), and continuity. Linearity proof. Using a,a′ for constants, f+(a(m,b)+a′(m′,b′))=f+(am+a′m′,ab+ab′)=(am+a′m′)(f)+ab+a′b′ =a(m(f)+b)+a′(m′(f)+b′)=af+(m,b)+a′f+(m′,b′) So we have verified that f+(aM+a′M′)=af+(M)+a′f+(M′) and we have linearity. Positivity proof: An sa-measure M, writeable as (m,b) has m uniquely writeable as a pair of finite measures m+ (all the positive regions) and a m− (all the negative regions) by the [Jordan Decomposition Theorem](https://en.wikipedia.org/wiki/Hahn_decomposition_theorem), and b+m−(1)≥0. So, f+(M)=m(f)+b=m+(f)+m−(f)+b≥0+m−(1)+b≥0 The first ≥ by 1≥f≥0, so the expectation of f is positive and m− is negative so taking the expectation of 1 is more negative. The second ≥ is by the condition on how m− relates to b. Continuity proof: Fix a sequence (mn,bn) converging to (m,b). Obviously the b part converges, so now we just need to show that mn(f) converges to m(f). The metric we have on the space of finite signed measures is the KR-metric, which implies the thing we want. This only works for continuous f, not general f.   **Theorem 1:** *Every positive functional on*Msa(X)*can be written as*(m,b)↦c(m(f)+b)*, where*c≥0*, and*f∈C(X,[0,1]) Proof Sketch: The first part is showing that it's impossible to have a positive functional where the b term doesn't matter, without the positive functional being the one that maps everything to 0. The second part of the proof is recovering our f by applying the positive functional to Dirac-delta measures δx, to see what the function must be on point x. Part 1:  Let's say f+ isn't 0, ie there's some nonzero (m,b) pair where f+(m,b)>0, and yet f+(0,1)=0 (which, by linearity, means that f+(0,b)=0 for all b). We'll show that this situation is impossible. Then, 0<f+(m,b)=f+(m+,0)+f+(m−,b) by our starting assumption, and Jordan decomposition of m, along with linearity of positive functionals. Now, f+(m−,b)+f+(−2(m−),0)=f+(−(m−),b) because positive functionals are linear, and everything in that above equation is an sa-measure (flipping a negative measure makes a positive measure, which doesn't impose restrictions on the b term except that it be ≥0).  And so, by nonnegativity of positive functionals on sa-measures, f+(m−,b)≤f+(−(m−),b). Using this, we get f+(m+,0)+f+(m−,b)≤f+(m+,0)+f+(−(m−),b) =f+(m+,0)+f+(−(m−),0)+f+(0,b)=f+(m+,0)+f+(−(m−),0) Another use of linearity was invoked for the first = in the second line, and then the second = made use of our assumption that f+(0,b)=0 for all b. At this point, we have derived that 0<f+(m+,0)+f+(−(m−),0). Both of these are positive measures. So, there exists some positive measure m′ where f+(m′,0)>0. Now, observe that, for all b, 0=f+(0,b)=f+(m′,0)+f+(−(m′),b) Let b be sufficiently huge to make (−(m′),b) into an sa-measure. Also, since f+(m′,0)>0, f+(−(m′),b)<0, which is impossible because positive functionals are nonnegative on all sa-measures. Contradiction. 
Due to the contradiction, if there's a nonzero positive functional, it must assign f+(0,1)>0, so let f+(0,1) be our c term. Proof part 2: Let's try to extract our f. Let f(x):=f+(δx,0)/f+(0,1) This is just recovering the value of the hypothesized f on x by feeding our positive functional the measure δx that assigns 1 value to x and nothing else, and scaling. Now, we just have to verify that this f is continuous and in [0,1]. For continuity, let xn limit to x. By the KR-metric we're using, (δxn,0) limits to (δx,0). By continuity of f+, f+(δxn,0) limits to f+(δx,0). Therefore, f(xn) limits to f(x) and we have continuity. For a lower bound, f≥0, because f(x) is a ratio of two nonnegative numbers, and the denominator isn't 0. Now we just have to show that f≤1. For contradiction, assume there's an x where f(x)>1. Then f+(δx,0)/f+(0,1)>1, so f+(δx,0)>f+(0,1), and in particular, f+(0,1)−f+(δx,0)<0. But then, f+(−(δx),1)+f+(δx,0)=f+(0,1), so f+(−(δx),1)=f+(0,1)−f+(δx,0)<0 However, (−(δx),1) is an sa-measure, because −δx(1)+1=0, and must have nonnegative value, so we get a contradiction. Therefore, f∈C(X,[0,1]). To wrap up, we can go: f+(m,b)=f+(m,0)+f+(0,b)=(f+(0,1)/f+(0,1))(∫X(f+(δx,0))dm+f+(0,b)) =f+(0,1)(∫X(f+(δx,0)/f+(0,1))dm+f+(0,b)/f+(0,1))=c(∫Xf(x)dm+b)=c(m(f)+b) And c≥0, and f∈C(X,[0,1]), so we're done.   **Lemma 1: Compactness Lemma:** *Fixing some nonnegative constants*λ◯*and*b◯*, the set of sa-measures where*m+(1)∈[0,λ◯]*,*b∈[0,b◯]*, is compact. Further, if a set lacks an upper bound on*m+(1)*or on*b*, it's not compact.* Proof Sketch: We fix an arbitrary sequence of sa-measures, and then use the fact that closed intervals and the space ΔX are compact complete metric spaces to isolate a suitable convergent subsequence. Since all sequences have a limit point, the set is compact. Then, we go in the other direction, and get a sequence with no limit points assuming either a lack of upper bounds on m+(1), or a lack of upper bounds on b. Proof: Fix some arbitrary sequence Mn wandering about within this space, which breaks down into (m+n,0)+(m−n,bn), and then, since all measures are just a probability distribution scaled by the constant m(1), it further breaks down into (m+n(1)⋅μn,0)+(m−n(1)⋅μ′n,bn). Since bn+m−n(1)≥0, m−n(1) must be bounded in [−b◯,0]. Now, what we can do is extract a subsequence where bn, m+n(1), m−n(1), μn, and μ′n all converge, by Tychonoff's Theorem (finite product, no axiom of choice required). Our three number sequences are all confined to a bounded interval, and our two probability sequences are wandering around within ΔX, which is a compact complete metric space if X is. The limit of this subsequence is a limit point of the original sequence, since all its components are arbitrarily close to the components that make up Mn for large enough n in our subsequence. The limiting value of m+(1) and b both obey their respective bounds, and the cone of sa-measures is closed, so the limit point is an sa-measure and respects the bounds too. Therefore the set is compact, because all sequences of points in it have a limit point. In the other direction, assume a set B has unbounded b values. Then we can fix a sequence (mn,bn)∈B where bn increases without bound, so the sa-measures can't converge. The same applies to all subsequences, so there's no limit point, so B isn't compact. Now, assume a set B has bounded b values, call the least upper bound b⊙, but the value of m+(1) is unbounded. Fix a sequence (mn,bn)∈B where m+n(1) is unbounded above. Assume a convergent subsequence exists. 
Since bn+m−n(1)≥0, m−n(1) must be bounded in [−b⊙,0]. Then because mn(1)=m+n(1)+m−n(1)≥m+n(1)−b⊙, and the latter quantity is finite, mn(1) must be unbounded above. However, in order for the mn to limit to some m, limn→∞mn(1)=m(1), which results in a contradiction. Therefore, said convergent subsequence doesn't exist, and B is not compact. Put together, we have a necessary-and-sufficient condition for a closed subset of Msa(X) to be compact. There must be an upper bound on b and m+(1), respectively.   **Lemma 2:** *The upper completion of a closed set of sa-measures is closed.* Proof sketch: We'll take a convergent sequence (mn,bn) in the upper completion of B that limits to (m,b), and show that, in order for it to converge, the same sorts of bounds as the Compactness Lemma uses must apply. Then, breaking down (mn,bn) into (mBn,bBn)+(m∗n,b∗n), where (mBn,bBn)∈B, and (m∗n,b∗n) is an sa-measure, we'll transfer these Compactness-Lemma-enabling bounds to the sequences (mBn,bBn) and (m∗n,b∗n), to get that they're both wandering around in a compact set. Then, we just take a convergent subsequence of both, add the two limit points together, and get our limit point (m,b), witnessing that it's in the upper completion of B. Proof: Let (mn,bn)∈B+Msa(X) limit to some (m,b). A convergent sequence (plus its one limit point) is a compact set of points, so, by the Compactness Lemma, there must be a b◯ and λ◯ that are upper bounds on the bn and m+n(1) values, respectively. Now, for all n, break down (mn,bn) as (mBn,bBn)+(m∗n,b∗n), where (mBn,bBn)∈B, and (m∗n,b∗n) is an sa-measure. Because bBn+b∗n=bn≤b◯, we can bound the bBn and b∗n quantities by b◯. This transfers into a −b◯ lower bound on mB−n(1) and m∗−n(1), respectively. Now, we can go: mB+n(1)+mB−n(1)+m∗+n(1)+m∗−n(1)=mBn(1)+m∗n(1)=mn(1) =m+n(1)+m−n(1)≤m+n(1)≤λ◯ Using worst-case values for mB−n(1) and m∗−n(1), we get: mB+n(1)+m∗+n(1)−2b◯≤λ◯ mB+n(1)+m∗+n(1)≤λ◯+2b◯ So, we have upper bounds on mB+n(1) and m∗+n(1) of λ◯+2b◯, respectively. Due to the sequences (mBn,bBn) and (m∗n,b∗n) respecting bounds on b and m+(1) (b◯ and λ◯+2b◯ respectively), and wandering around within the closed sets B and Msa(X) respectively, we can use the Compactness Lemma and Tychonoff's theorem (finite product, no axiom of choice needed) to go "hey, there's a subsequence where both (mBn,bBn) and (m∗n,b∗n) converge, call the limit points (mB,bB) and (m∗,b∗). Since B and Msa(X) are closed, (mB,bB)∈B, and (m∗,b∗)∈Msa(X)." Now, does (mB,bB)+(m∗,b∗)=(m,b)? Well, for any ϵ, there's some really large n where d((mBn,bBn),(mB,bB))<ϵ, d((m∗n,b∗n),(m∗,b∗))<ϵ, and d((mn,bn),(m,b))<ϵ. Then, we can go: d((m,b),(mB,bB)+(m∗,b∗))≤d((m,b),(mn,bn))+d((mn,bn),(mB,bB)+(m∗,b∗)) =d((m,b),(mn,bn))+d((mBn,bBn)+(m∗n,b∗n),(mB,bB)+(m∗,b∗)) =d((m,b),(mn,bn))+||((mBn,bBn)+(m∗n,b∗n))−((mB,bB)+(m∗,b∗))|| =d((m,b),(mn,bn))+||((mBn,bBn)−(mB,bB))+((m∗n,b∗n)−(m∗,b∗))|| ≤d((m,b),(mn,bn))+||(mBn,bBn)−(mB,bB)||+||(m∗n,b∗n)−(m∗,b∗)|| =d((m,b),(mn,bn))+d((mBn,bBn),(mB,bB))+d((m∗n,b∗n),(m∗,b∗))<3ϵ So, regardless of ϵ, d((m,b),(mB,bB)+(m∗,b∗))<3ϵ, so (mB,bB)+(m∗,b∗)=(m,b). So, we've written (m,b) as a sum of an sa-measure in B and an sa-measure, certifying that (m,b)∈B+Msa(X), so B+Msa(X) is closed.   **Proposition 2:** *For closed convex nonempty* B,B+Msa(X)={M|∀f+∃M′∈B:f+(M)≥f+(M′)} Proof sketch: Show both subset inclusion directions. 
One is very easy, then we assume the second direction is false, and invoke the [Hahn-Banach theorem](https://en.wikipedia.org/wiki/Hahn%E2%80%93Banach_theorem) to separate a point in the latter set from the former set. Then we show that the separating functional is a positive functional, so we have a positive functional where the additional point underperforms everything in B+Msa(X), which is impossible by the definition of the latter set. Easy direction: We will show that B+Msa(X)⊆{M|∀f+∃M′∈B:f+(M)≥f+(M′)} This is because an M∈(B+Msa(X)) can be written as M=MB+M∗. Let MB be our M′ of interest. Then, it is indeed true that for all f+, f+(M)=f+(MB)+f+(M∗)≥f+(MB) Hard direction: Assume by contradiction that B+Msa(X)⊂{M|∀f+∃M′∈B:f+(M)≥f+(M′)} Then there's some M where ∀f+∃M′∈B:f+(M)≥f+(M′) and M∉B+Msa(X). B+Msa(X) is the upper completion of a closed set, so by Lemma 2, it's closed, and since it's the Minkowski sum of convex sets, it's convex. Now, we can use the variant of the Hahn-Banach theorem from the [Wikipedia article](https://en.wikipedia.org/wiki/Hahn%E2%80%93Banach_theorem) on "Hahn-Banach theorem", in the "separation of a closed and compact set" section. Our single point M is compact, convex, nonempty, and disjoint from the closed convex set B+Msa(X). Banach spaces are locally convex, so we can invoke Hahn-Banach separation. Therefore, there's some continuous linear functional ϕ s.t. ϕ(M)<infM′∈(B+Msa(X))ϕ(M′) We will show that this linear functional is actually a positive functional! Assume there's some sa-measure M∗ where ϕ(M∗)<0. Then we can pick a random MB∈B, and consider ϕ(MB+cM∗), where c is extremely large. MB+cM∗ lies in B+Msa(X), but it would also produce an extremely negative value for ϕ which undershoots ϕ(M), which is impossible. So ϕ is a positive functional. However, ϕ(M)<infM′∈(B+Msa(X))ϕ(M′), so ϕ(M)<infM′∈Bϕ(M′). But also, M fulfills the condition ∀f+∃M′∈B:f+(M)≥f+(M′), because of the set it came from. So, there must exist some M′∈B where ϕ(M)≥ϕ(M′). But, we have a contradiction, because ϕ(M)<infM′∈Bϕ(M′). So, there cannot be any point in {M|∀f+∃M′∈B:f+(M)≥f+(M′)} that isn't in B+Msa(X). This establishes equality.   **Lemma 3:** *For any closed set*B⊆Msa(X)*and point*M∈B*, the set*({M}−Msa(X))∩B*is nonempty and compact.* Proof: It's easy to verify nonemptiness, because M is in the set. Also, it's closed because it's the intersection of two closed sets. B was assumed closed, and the other part is the Minkowski sum of {M} and −Msa(X), which is closed if −Msa(X) is, because it's just a shift of −Msa(X) (via a single point). −Msa(X) is closed because it's -1 times a closed set. We will establish a bound on the m+(1) and b values of anything in the set, which lets us invoke the Compactness Lemma to show compactness, because it's a closed subset of a compact set. Note that if M′∈({M}−Msa(X))∩B, then M′=M−M∗, so M′+M∗=M. Rewrite this as (m′,b′)+(m∗,b∗)=(m,b) Because b′+b∗=b, we can bound b′ and b∗ by b. This transfers into a −b lower bound on m′−(1) and m∗−(1). Now, we can go: m′+(1)+m′−(1)+m∗+(1)+m∗−(1)=m′(1)+m∗(1)=m(1) =m+(1)+m−(1)≤m+(1) Using worst-case values for m′−(1) and m∗−(1), we get: m′+(1)+m∗+(1)−2b≤m+(1) m′+(1)≤m′+(1)+m∗+(1)≤m+(1)+2b So, we have an upper bound of m+(1)+2b on m′+(1), and an upper bound of b on b′. Further, (m′,b′) was arbitrary in ({M}−Msa(X))∩B, so we have our bounds. This lets us invoke the Compactness Lemma, and conclude that said closed set is compact. 
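(An aside I'm adding for concreteness — this is not part of the original proofs. When X is finite, the objects above are easy to play with numerically: a finite signed measure is just a vector, m(f) is a dot product, and the sa-measure condition and Proposition 1 can be checked directly. All names below are my own illustrative choices.)

```python
import numpy as np

# Finite-X sanity check: X has 3 points, so a finite signed measure m is a
# vector in R^3 and m(f) is a dot product. An sa-measure is a pair (m, b)
# with b + m^-(1) >= 0, where m^- is the negative part of m.

def is_sa_measure(m, b):
    """Check b + m^-(1) >= 0; m^-(1) is the total mass of the negative part."""
    return b + m[m < 0].sum() >= 0

def positive_functional(f, m, b):
    """The positive functional f+(m, b) = m(f) + b from Proposition 1."""
    assert np.all((0 <= f) & (f <= 1)), "f must be in C(X, [0,1])"
    return m @ f + b

rng = np.random.default_rng(0)
for _ in range(10_000):
    m = rng.normal(size=3)              # a random signed measure
    b = rng.uniform(0, 5)
    f = rng.uniform(0, 1, size=3)       # a random f in C(X, [0,1])
    if is_sa_measure(m, b):
        # Proposition 1: f+ is nonnegative on sa-measures (up to float noise).
        assert positive_functional(f, m, b) >= -1e-9
```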
**Lemma 4:** *If*≥*is a partial order on*B*where*M′≥M*iff there's some sa-measure*M∗*where*M=M′+M∗*, then* ∃M′>M↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B)↔M is not minimal in B Proof: ∃M′>M↔∃M′≠M:M′≥M Also, M′≥M↔(M′,M∈B∧∃M∗:M=M′+M∗) Also, ∃M∗:M=M′+M∗↔∃M∗:M−M∗=M′↔M′∈({M}−Msa(X)) Putting all this together, we get (∃M′>M)↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B) And we're halfway there. Now for the second half. M is not minimal in B↔M∈B∧(∃M′∈B:M′≠M∧(∃M∗:M=M′+M∗)) Also, ∃M∗:M=M′+M∗↔∃M∗:M−M∗=M′↔M′∈({M}−Msa(X)) Putting this together, we get M is not minimal in B↔(M∈B∧∃M′≠M:M′∈({M}−Msa(X))∩B) And the result has been proved.   **Theorem 2:** *Given a nonempty closed set*B*, the set of minimal points*Bmin*is nonempty and all points in*B*are above a minimal point.* Proof sketch: First, we establish a partial order that's closely tied to the ordering on B, but flipped around, so minimal points in B are maximal elements. We show that it is indeed a partial order, letting us leverage Lemma 4 to translate between the partial order and the set B. Then, we show that every chain in the partial order has an upper bound via Lemma 3 and compactness arguments, letting us invoke Zorn's lemma to show that everything in the partial order is below a maximal element. Then, we just do one last translation to show that minimal points in B perfectly correspond to maximal elements in our partial order. Proof: First, impose a partial order on B, where M′≥M iff there's some sa-measure M∗ where M=M′+M∗. Notice that this flips the order. If an sa-measure is "below" another sa-measure in the sa-measure addition sense, it's above that sa-measure in this ordering. So a minimal point in B would be maximal in the partial order. We will show that it's indeed a partial order. Reflexivity is immediate. M=M+(0,0), so M≥M. For transitivity, assume M′′≥M′≥M. Then there's some M∗ and M′∗ s.t. M=M′+M∗, and M′=M′′+M′∗. Putting these together, we get M=M′′+(M∗+M′∗), and adding sa-measures gets you an sa-measure, so M′′≥M. For antisymmetry, assume M′≥M and M≥M′. Then M=M′+M∗, and M′=M+M′∗. By substitution, M=M+(M∗+M′∗), so M′∗=−M∗. For all positive functionals, f+(M′∗)=f+(−M∗)=−f+(M∗), and since positive functionals are always nonnegative on sa-measures, the only way this can happen is if M∗ and M′∗ are 0, showing that M=M′. Anyways, since we've shown that it's a partial order, all we now have to do is show that every chain has an upper bound in order to invoke Zorn's lemma to show that every point in B lies below some maximal element. Fix some ordinal-indexed chain Mγ, and associate each of them with the set Sγ=({Mγ}+(−Msa(X)))∩B, which is compact by Lemma 3 and always contains Mγ. The collection of Sγ also has the finite intersection property, because, fixing finitely many of them, we can consider a maximal γ∗, and Mγ∗ is in every associated set by: Case 1: Some other Mγ equals Mγ∗, so Sγ=Sγ∗ and Mγ∗∈Sγ∗=Sγ. Case 2: Mγ∗>Mγ, and by Lemma 4, Mγ∗∈({Mγ}−Msa(X))∩B. Anyways, since all the Sγ are compact, and have the finite intersection property, we can intersect them all and get a nonempty set containing some point M∞. M∞ lies in B, because all the sets we intersected were subsets of B. Also, because M∞∈({Mγ}−Msa(X))∩B for all γ in our chain, then if M∞≠Mγ, Lemma 4 lets us get M∞>Mγ, and if M∞=Mγ, then M∞≥Mγ. Thus, M∞ is an upper bound for our chain. By Zorn's Lemma, because every chain has an upper bound, there are maximal elements in B, and every point in B has a maximal element above it. 
To finish up, use Lemma 4 to get: M is maximal↔¬∃M′>M↔M is minimal in B   **Proposition 3:** Given a f∈C(X,[0,1]), and a B that is nonempty closed, inf(m,b)∈B(m(f)+b)=inf(m,b)∈Bmin(m(f)+b) Direction 1: since Bmin is a subset of B, we get one direction easily, that inf(m,b)∈B(m(f)+b)≤inf(m,b)∈Bmin(m(f)+b) Direction 2: Take a M∈B. By Theorem 2, there is a Mmin∈Bmin s.t. M=Mmin+M∗. Applying our positive functional m(f)+b (by Proposition 1), we get that m(f)+b≥mmin(f)+bmin. Because every point in B has a point in Bmin which scores as low or lower according to the positive functional, inf(m,b)∈B(m(f)+b)≥inf(m,b)∈Bmin(m(f)+b) And this gives us our desired equality.   **Proposition 4:** *Given a nonempty closed convex*B*,*Bmin=(Buc)min*and*(Bmin)uc=Buc Proof: First, we'll show Bmin=(Buc)min. We'll use the characterization in terms of the partial order ≤ we used for the Zorn's Lemma proof of Theorem 2. If a point M is in Buc, then it can be written as M=MB+M∗, so M≤MB. Since all points added in Buc lie below a preexisting point in B (according to the partial order from Theorem 2) the set of maximals (ie, set of minimal points) is completely unchanged when we add all the new points to the partial order via upper completion, so Bmin=(Buc)min. For the second part, one direction is immediate. Bmin⊆B, so (Bmin)uc⊆Buc. For the reverse direction, take a point M∈Buc. It can be decomposed as MB+M∗, and then by Theorem 2, MB can be decomposed as Mmin+M′∗, so M=Mmin+(M∗+M′∗), so it lies in (Bmin)uc, and we're done.   **Theorem 3:** *If the nonempty closed convex sets*A*and*B*have*Amin≠Bmin*, then there is some*f∈C(X,[0,1])*where*EA(f)≠EB(f) Proof sketch: We show that upper completion is idempotent, and then use that to show that the upper completions of A and B are different. Then, we can use Hahn-Banach to separate a point of A from Buc (or vice-versa), and show that the separating functional is a positive functional. Finally, we use Theorem 1 to translate from a separating positive functional to different expectation values of some f∈C(X,[0,1]) Proof: Phase 1 is showing that upper completion is idempotent. (Buc)uc=Buc. One direction of this is easy, Buc⊆(Buc)uc. In the other direction, let M∈(Buc)uc. Then we can decompose M into M′+M∗, where M′∈Buc, and decompose that into MB+M′∗ where MB∈B, so M=MB+(M∗+M′∗) and M∈Buc. Now for phase 2, we'll show that the minimal points of one set aren't in the upper completion of the other set. Assume, for contradiction, that this is false, so Amin⊆Buc and Bmin⊆Auc. Then, by idempotence, Proposition 4, and our subset assumption, Auc=(Amin)uc⊆(Buc)uc=Buc Swapping the A and B, the same argument holds, so Auc=Buc, so (Buc)min=(Auc)min. Now, using this and Proposition 4, Bmin=(Buc)min=(Auc)min=Amin. But wait, we have a contradiction, we said that the minimal points of B and A weren't the same! Therefore, either Bmin⊈Auc, or vice-versa. Without loss of generality, assume that Bmin⊈Auc. Now for phase 3, Hahn-Banach separation to get a positive functional with different inf values. Take a point MB in Bmin that lies outside Auc. Now, use the Hahn-Banach separation of {MB} and Auc used in the proof of Proposition 2, to get a linear functional ϕ (which can be demonstrated to be a positive functional by the same argument as the proof of Proposition 2) where: ϕ(MB)<infM∈Aucϕ(M). Thus, infM∈Bϕ(M)<infM∈Aϕ(M), so infM∈Bϕ(M)≠infM∈Aϕ(M) Said positive functional can't be 0, otherwise both sides would be 0. Thus, by Theorem 1, ϕ((m,b))=a(m(f)+b) where a>0, and f∈C(X,[0,1]). 
Swapping this out, we get: inf(m,b)∈Ba(m(f)+b)≠inf(m′,b′)∈Aa(m′(f)+b′) inf(m,b)∈B(m(f)+b)≠inf(m′,b′)∈A(m′(f)+b′) and then this is EB(f)≠EA(f) So, we have crafted our f∈C(X,[0,1]) which distinguishes the two sets and we're done.   **Corollary 1:** *If two nonempty closed convex upper-complete sets*A*and*B*are different, then there is some*f∈C(X,[0,1])*where* EA(f)≠EB(f) Proof: Either Amin≠Bmin, in which case we can apply Theorem 3 to separate them, or their sets of minimal points are the same. In that case, by Proposition 4 and upper completion, A=Auc=(Amin)uc=(Bmin)uc=Buc=B and we have a contradiction because the two sets are different.   **Theorem 4:** *If*H*is an infradistribution/bounded infradistribution, then*h:f↦EH(f)*is concave in*f*, monotone, uniformly continuous/Lipschitz,*h(0)=0,h(1)=1*, and if*range(f)⊈[0,1]*,*h(f)=−∞ Proof sketch: h(0)=0,h(1)=1 is trivial, as is uniform continuity from the weak bounded-minimal condition. For concavity and monotonicity, it's just some inequality shuffling, and for h(f)=−∞ if f∈C(X),f∉C(X,[0,1]), we use upper completion to have its worst-case value be arbitrarily negative. Lipschitzness is much more difficult, and comprises the bulk of the proof. We get a duality between minimal points and hyperplanes in C(X)⊕R, show that all the hyperplanes we got from minimal points have the same Lipschitz constant upper bound, and then show that the chunk of space below the graph of h itself is the same as the chunk of space below all the hyperplanes we got from minimal points. Thus, h has the same (or lesser) Lipschitz constant as all the hyperplanes chopping out stuff above the graph of h. Proof: For normalization, h(1)=EH(1)=1 and h(0)=EH(0)=0 by normalization for H. Getting the uniform continuity condition from the weak-bounded-minimal condition on an infradistribution H is also trivial, because the condition just says f↦EH(f) is uniformly continuous, and that's just h itself. Let's show that h is concave over C(X,[0,1]), first. We're shooting for h(pf+(1−p)f′)≥ph(f)+(1−p)h(f′). To show this, h(pf+(1−p)f′)=EH(pf+(1−p)f′)=inf(m,b)∈H(m(pf+(1−p)f′)+b) =inf(m,b)∈H(p(m(f)+b)+(1−p)(m(f′)+b)) ≥pinf(m,b)∈H(m(f)+b)+(1−p)inf(m′,b′)∈H(m′(f′)+b′) =pEH(f)+(1−p)EH(f′)=ph(f)+(1−p)h(f′) And concavity has been proved. Now for monotonicity. By Proposition 3 and Proposition 1, ∀f:inf(m,b)∈H(m(f)+b)=inf(m,b)∈Hmin(m(f)+b) Now, let's say f′≥f. Then: EH(f)=inf(m,b)∈H(m(f)+b)=inf(m,b)∈Hmin(m(f)+b)≤inf(m,b)∈Hmin(m(f′)+b) =inf(m,b)∈H(m(f′)+b)=EH(f′) And we're done. The critical inequality in the middle came from all minimal points in an infradistribution having no negative component by positive-minimals, so swapping out a function for a greater function produces an increase in value. Time for range(f)⊈[0,1]→h(f)=−∞. Let's say there exists an x s.t. f(x)>1. We can take an arbitrary sa-measure (m,b)∈H, and consider (m,b)+c(−δx,1), where δx is the point measure that's 1 on x, and c is extremely huge. The latter part is an sa-measure. But then, (m−cδx)(f)+(b+c)=m(f)+b+c(1−δx(f))=m(f)+b+c(1−f(x)). Since f(x)>1, and c is extremely huge, this is extremely negative. So, since there's sa-measures that make the function as negative as we wish in H by upper-completeness, inf(m,b)∈H(m(f)+b)=−∞ A very similar argument can be done if there's an x where f(x)<0, we just add in (cδx,0) to force arbitrarily negative values. Now for Lipschitzness, which is by far the worst of all. A minimal point (m,b) induces an affine function hm,b (kinda like a hyperplane) of the form hm,b(f)=m(f)+b. 
Regardless of (m,b), as long as it came from a minimal point in H, hm,b≥h for functions with range in [0,1], because hm,b(f)=m(f)+b≥inf(m,b)∈H(m(f)+b)=EH(f)=h(f) Ok, so if a point is on-or-below the graph of h over C(X,[0,1]), then it's on-or-below the graph of hm,b for all (m,b)∈Hmin. What about the other direction? Is it possible for a point (f,b′) to be strictly above the graph of h and yet ≤ all the graphs of hm,b? Well, no. Invoking Proposition 3, b′>h(f)=EH(f)=inf(m,b)∈H(m(f)+b)=inf(m,b)∈Hmin(m(f)+b)=inf(m,b)∈Hmin(hm,b(f)) So, there exists a minimal point (m,b)∈Hmin where b′>hm,b(f), so (f,b′) lies above the graph of hm,b. Putting these two parts together, h's hypograph over C(X,[0,1]) is the same as the intersection of the hypographs of all these hm,b. If we can then show all the hm,b have a Lipschitz constant bounded above by some constant, then we get that h itself is Lipschitz with the same constant. First, a minimal (m,b) must have m having no negative parts, so it can be written as λμ, and by bounded-minimals (since we have a bounded infradistribution), λ≤λ⊙. Now, |hm,b(f)−hm,b(f′)|=|m(f)+b−m(f′)−b|=|m(f−f′)|≤m(|f−f′|) =(λμ)(|f−f′|)=λ⋅μ(|f−f′|)≤λsupx∈X|f(x)−f′(x)|≤λ⊙supx∈X|f(x)−f′(x)| So, we get that: |hm,b(f)−hm,b(f′)|/supx∈X|f(x)−f′(x)|≤λ⊙supx∈X|f(x)−f′(x)|/supx∈X|f(x)−f′(x)|=λ⊙ Note that supx∈X|f(x)−f′(x)| is our distance metric between functions in C(X). This establishes that regardless of which minimal point we picked, hm,b is Lipschitz with Lipschitz constant ≤λ⊙, and since h=inf(m,b)∈Hminhm,b, then h itself has the same bound on its Lipschitz constant.   **Lemma 5:** ∀m:inff∈C(X,[0,1])(m(f))=m−(1) Proof sketch: We'll work in the Banach space L1(|m|) of L1 measurable functions w.r.t the absolute value of the signed measure m. Then, we consider the discontinuous (but L1) function that's 1 everywhere where m is negative. Continuous functions are dense in L1 measurable functions, so we can fix a sequence of continuous functions limiting to said indicator function. Then we just have to check that f↦m(f) is a bounded linear functional, and we get that there's a sequence of continuous functions f′n where m(f′n) limits to the measure of the indicator function that's 1 where everything is negative. Which is the same as the measure of the "always 1" function, but only on the negative parts, and we're done. Consider the Banach space L1(|m|) of measurable functions w.r.t. the absolute value of the signed measure m, ie, |m|=m+−m−, which is a measure. It has a norm given by ||f||=∫X|f|d|m|. To begin with, we can consider the L1 indicator function 1m− that's 1 where the measure is negative. Note that m(1m−)=∫X1m−dm=∫X1m−dm++∫X1m−dm− =∫X0dm++∫X1dm−=∫X1dm−=m−(1) Because continuous functions are dense in L1, we can fix a sequence of continuous functions fn limiting to 1m−. Then, just clip those continuous functions to [0,1], making a continuous function f′n. They'll get *closer* to 1m− that way, so the sequence f′n of continuous functions X→[0,1] limits to 1m− too. We'll take a detour and show that m is a bounded linear functional L1(|m|)→R, with a Lipschitz constant of 1 or less. First, m(af+a′f′)=am(f)+a′m(f′), trivially, establishing linearity. 
As for the boundedness, if ||f||≤1, then ∫X|f|d|m|≤1, so: 1≥∫X|f|d|m|=∫Xsup(f,0)d|m|−∫Xinf(f,0)d|m| =∫Xsup(f,0)dm++∫Xsup(f,0)d|m−|−∫Xinf(f,0)dm+−∫Xinf(f,0)d|m−| =∣∣∫Xsup(f,0)dm+∣∣+∣∣−∫Xsup(f,0)d|m−|∣∣+∣∣∫Xinf(f,0)dm+∣∣+∣∣−∫Xinf(f,0)d|m−|∣∣ ≥∣∣∫Xsup(f,0)dm+−∫Xsup(f,0)d|m−|+∫Xinf(f,0)dm+−∫Xinf(f,0)d|m−|∣∣ =∣∣∫Xsup(f,0)dm+∫Xinf(f,0)dm∣∣=∣∣∫Xfdm∣∣=|m(f)| So, m(f)∈[−1,1]. An f having a norm of 1 or less gets mapped to a number with a norm of 1 or less, so the Lipschitz constant of f↦m(f) is 1 or less. This implies continuity. Now that we have all requisite components, fix some ϵ. There's some n where, for all greater n, d(1m−,f′n)<ϵ. Mapping them through f↦m(f), due to having a Lipschitz constant of 1 or less, then means that ϵ>|m(f′n)−m(1m−)|=m(f′n)−m(1m−)=m(f′n)−m−(1) because the value of 1-but-only-on-negative-parts is as-or-more negative than f′n on the measure, due to f′n being bounded in [0,1]. Summarizing, ϵ>m(f′n)−m−(1) for all n beyond a certain point, so, for all n beyond a certain point, m(f′n)<ϵ+m−(1) So we have a sequence of functions in C(X,[0,1]) where m(f′n) limits to m−(1), and our signed measure was arbitrary. Therefore, we have our result that ∀m:inff∈C(X,[0,1])m(f)=m−(1).   **Theorem 5:** If h is a function C(X)→R that is concave, monotone, uniformly-continuous/Lipschitz, h(0)=0, h(1)=1, and range(f)⊈[0,1]→h(f)=−∞, then it specifies an infradistribution/bounded infradistribution by: {(m,b)|b≥(h′)∗(m)}, where h′ is the function given by h′(−f)=−h(f), and (h′)∗ is the convex conjugate of h′. Also, going from an infradistribution to an h and back recovers exactly the infradistribution, and going from an h to an infradistribution and back recovers exactly h. Proof sketch: This is an extremely long one. Phases 1 and 2 show isomorphism. One direction is reshuffling the definition of H until we get the definition of the set built from h′ via convex conjugate, showing that going H to h and back recovers your original set. In the other direction, we show that expectations w.r.t the set we built from h match up with h exactly. Phase 3 is cleanup of the easy conditions. Nonemptiness is pretty easy to show, the induced set being a set of sa-measures is harder to show and requires moderately fancier arguments, and closure and convexity require looking at basic properties of functions and the convex conjugate. Upper completeness takes some equation shuffling to show but isn't too bad. The weak-minimal bound property is immediate, and normalization is fairly easy.  That just leaves the positive-minimal property and the bounded-minimal properties, respectively, which are nightmares. A lesser nightmare and a greater nightmare. For phase 4 to lay the groundwork for these, we establish an isomorphism between points in H and hyperplanes which lie above the graph of h, as well as a way of certifying that a point in H isn't minimal by what its hyperplane does. Phase 5 is, for showing positive-minimals, we can tell whether a hyperplane corresponds to an a-measure, and given any hyperplane above the graph of h, construct a lower one that corresponds to a lower point in H that *does* correspond to an a-measure. Phase 6 is, for bounded-minimals, we take a hyperplane that *may* correspond to a minimal point, but which is too steep in certain directions. Then, we make an open set that fulfills the two roles of: if you enter it, you're too steep, or you overshoot the hyperplane of interest that you're trying to undershoot. 
Some fancy equation crunching and one application of Hahn-Banach later, we get a hyperplane that lies above h and doesn't enter our open set we crafted. So, in particular, it undershoots our hyperplane of interest, and isn't too steep. This certifies that our original "too steep" hyperplane didn't actually correspond to a minimal point, so all minimal points must have a bound on their λ values by the duality between hyperplanes above h and points in H. Fix the convention that supf or inff is assumed to mean f∈C(X), we'll explicitly specify when f has bounds. Phase 1: Let's show isomorphism. Our first direction is showing H to h and back is H exactly. By upper completion, and Proposition 2, we can also characterize H as {M|∀f+∃M′∈H:f+(M)≥f+(M′)} Using Theorem 1 to express all positive functionals as arising from an f∈C(X,[0,1]), and observing that the a constant in front doesn't change which stuff scores lower than which other stuff, so we might as well characterize everything in terms of f, H can also be expressed as {(m,b)|∀f∈C(X,[0,1]):m(f)+b≥inf(m′,b′)∈H(m′(f)+b′)} We can swap out C(X,[0,1]) for C(X), because, from the −∞ argument in Theorem 4, f going outside [0,1] means that inf(m′,b′)∈H(m′(f)+b′)=−∞. And then, our H can further be reexpressed as {(m,b)|∀f:m(f)+b≥EH(f)}={(m,b)|∀f:b≥EH(f)−m(f)} ={(m,b)|b≥supf(EH(f)−m(f))} Also, EH(f)=h(f)=−h′(−f), so we can rewrite this as: {(m,b)|b≥sup−f(m(−f)−h′(−f))}={(m,b)|b≥supf(m(f)−h′(f))} and, by the [definition of the convex conjugate](https://en.wikipedia.org/wiki/Convex_conjugate)(sup characterization) and the space of finite signed measures being the dual space of C(X), and m(f) being a functional applied to an element, this is {(m,b)|b≥(h′)∗(m)} So, our original set H is identical to the convex-conjugate set, when we go from H to h back to a set of sa-measures. Proof Phase 2: In the reverse direction for isomorphism, assume that h fulfills the conditions. We want to show that E{(m,b)|b≥(h′)∗(m)}(f)=h(f), so let's begin. E{(m,b)|b≥(h′)∗(m)}(f)=inf(m,b):b≥(h′)∗(m)(m(f)+b) Given an m, we have a natural candidate for minimizing the b, just set it equal to (h′)∗(m). So then we get infm(m(f)+(h′)∗(m))=infm((h′)∗(m)−m(−f)) And this is just... −(h′)∗∗(−f) (proof by [Wikipedia article](https://en.wikipedia.org/wiki/Convex_conjugate), check the inf characterization), and, because h is continuous over C(X,[0,1]), and concave, and −∞ everywhere outside the legit functions then h′ is continuous over C(X,[−1,0]), and convex, and ∞ everywhere outside the legit functions, so in particular, h′ is convex and lower-semicontinuous and proper, so h′=(h′)∗∗ by the [Fenchel-Moreau Theorem.](https://en.wikipedia.org/wiki/Fenchel%E2%80%93Moreau_theorem) From that, we get E{(m,b)|b≥(h′)∗(m)}(f)=−(h′)∗∗(−f)=−h′(−f)=h(f) and we're done with isomorphism. Now that isomorphism has been established, let's show the relevant conditions hold. Namely, nonemptiness, closure, convexity, upper completion, normality, weak-bounded-minimals (phase 3) and positive-minimals (phase 5) and bounded-minimals (assuming h is Lipschitz) (phase 6) to finish off. The last two will be extremely hard. Begin phase 3. Weak-bounded-minimals is easy by isomorphism. For our H′ we constructed, if f→EH′(f) wasn't uniformly continuous, then because EH′(f) equals h(f), we'd get a failure of uniform continuity for h, which was assumed. 
By the way, the convex conjugate, (h′)∗(m), can be expressed as (by Wikipedia, sup characterization) supf(m(f)−h′(f))=supf(m(−f)−h′(−f))=supf(h(f)−m(f)) We can further restrict f to functions with range in [0,1], because if it was anything else, we'd get −∞. We'll be using (h′)∗(m)=supf∈C(X,[0,1])(h(f)−m(f)) (or the supf variant) repeatedly. For nonemptiness, observe that (0,1) is present in H′ because, fixing an arbitrary f, (h′)∗(0)=supf∈C(X,[0,1])(h(f)−0(f))=supf∈C(X,[0,1])h(f)=1 This is from our format of the convex conjugate, and h being normalized and monotone, so the highest it can be is 1 and it attains that value. Therefore, 1≥(h′)∗(0), so (0,1) is in the H′ we constructed. For showing that our constructed set H′ lies in Msa(X), we have that, for a random (m++m−,b)∈H′, it has (by our characterization of (h′)∗(m)) b+m−(1)≥supf∈C(X,[0,1])(h(f)−(m++m−)(f))+m−(1) ≥supf∈C(X,[0,1])(−(m++m−)(f))+m−(1) =m−(1)−inff∈C(X,[0,1])((m++m−)(f))=m−(1)−m−(1)=0 This is by the lower bound on b being (h′)∗(m++m−) and unpacking the convex conjugate, h(f)≥h(0)=0 by monotonicity and normalization, a reexpression of sup, and Lemma 5, respectively. b+m−(1)≥0 so it's an sa-measure. For closure and convexity, by monotonicity of h, we have  0=−h(0)≥−h(f)≥−h(1)=−1 and h is continuous on C(X,[0,1]), concave, and −∞ everywhere else by assumption, so h′ is proper, continuous on C(X,[−1,0]), convex, and lower-semicontinuous in general because of the ∞ everywhere else, so, by the [Wikipedia page](https://en.wikipedia.org/wiki/Closed_convex_function) on "Closed Convex Function", h′ is a closed convex function, and then by the [Wikipedia page](https://en.wikipedia.org/wiki/Convex_conjugate) on "Convex Conjugate" in the Properties section, (h′)∗ is convex and closed. From the Wikipedia page on "Closed Convex Function", this means that the epigraph of (h′)∗ is closed, and also the epigraph of a convex function is convex. This takes care of closure and convexity for our H′. Time for upper-completeness. Assume that (m,b) lies in the epigraph. Our task now is to show that (m,b)+(m∗,b∗) lies in the epigraph. This is equivalent to showing that b+b∗≥(h′)∗(m+m∗). Note that b∗≥−m∗−(1), because (m∗,b∗) is an sa-measure. Let's begin. (h′)∗(m+m∗)=supf∈C(X,[0,1])(h(f)−(m+m∗)(f)) =supf∈C(X,[0,1])(h(f)−m(f)−m∗+(f)−m∗−(f))≤supf∈C(X,[0,1])(h(f)−m(f)+b∗) =b∗+supf∈C(X,[0,1])(h(f)−m(f))=b∗+(h′)∗(m)≤b∗+b This was done by unpacking the convex conjugate, splitting up m∗ into m∗+ and m∗−, locking two of the components in the sup to be an upper bound (which also gives the sup more flexibility on maximizing the other two components, so this is greater), packing up the convex conjugate, and using that b≥(h′)∗(m) because (m,b)∈H′. Normalization of the resulting set is easy. Going from h to a (maybe)-inframeasure H′ back to h is identity as established earlier, so all we have to do is show that a failure of normalization in a (maybe)-inframeasure makes the resulting h not normalized. Thus, if our h is normalized, and it makes an H′ that isn't normalized, then going back makes a non-normalized h, which contradicts isomorphism. So, assume there's a failure of normalization in H′. Then EH′(0)≠0, or EH′(1)≠1, so either h(0)≠0 or h(1)≠1 and we get a failure of normalization for h which is impossible. So H′ must be normalized. Begin phase 4. First, continuous affine functionals ϕ that lie above the graph of h perfectly correspond to sa-measures in H′. 
This is because the continuous dual space of C(X) is the space of finite signed measures, so we can interpret ϕ−ϕ(0) as a finite signed measure, and ϕ(0) as the b term. In one direction, given an (m,b)∈H′, ϕ(f)=m(f)+b≥inf(m,b)∈H′(m(f)+b)=EH′(f)=h(f) so every point in H′ induces a continuous affine functional C(X)→R whose graph is above h. In the other direction, from earlier, we can describe H′ as: {(m,b)|b≥supf(h(f)−m(f))} and then, for (ϕ−ϕ(0),ϕ(0)), supf(h(f)−(ϕ−ϕ(0))(f))=supf(h(f)−ϕ(f)+ϕ(0))≤ϕ(0) because ϕ(f)≥h(f). So continuous affine functionals whose graph lies above the graph of h correspond to points in H′. So, we have a link between affine functionals that lie above the graph of h, and points in H′. What would a minimal point correspond to? Well, a non-minimal point corresponds to (m,b)+(m∗,b∗), where the latter component is nonzero. There's *some* f+ where f+((m,b)+(m∗,b∗))>f+(m,b) due to the latter component being nonzero, and for all f+, f+((m,b)+(m∗,b∗))≥f+(m,b). Using Theorem 1 to translate positive functionals to f, this means that the ϕ induced by (m,b) lies *below* the affine functional induced by (m,b)+(m∗,b∗) over the f∈C(X,[0,1]). So, if there's a different affine functional ψ s.t. ∀f∈C(X,[0,1]):h(f)≤ψ(f)≤ϕ(f), then ϕ must correspond to a nonminimal point. Further, we can characterize whether ϕ corresponds to an a-measure or not. For a measure, if you increase your function you're feeding in, you increase the value you get back out, f′≥f→ϕ(f′)≥ϕ(f). For a signed measure with some negative component, Lemma 5 says we can find some f′∈C(X,[0,1]) that attain negative value, so you can add one of those f′ to your f and get ϕ(f+f′)<ϕ(f). So, a ϕ corresponds to an a-measure exactly when it's monotone. Phase 5: Proving positive-minimals. With these links in place, this means we just have to take any old point that's an sa-measure in H′, get a ϕ from it, it'll fulfill certain properties, and use those properties to find a ψ that lies below ϕ and above h on C(X,[0,1]) and is monotone, certifying that ψ corresponds to a point *below* our minimal-point of interest that's still in H′ but is an a-measure, so we have a contradiction. To that end, fix a ϕ that corresponds to some point in H′ that's not an a-measure (in particular, it has a negative component), it lies above the graph of h. Now, translate ϕ to a (mϕ,bϕ), where bϕ=ϕ(0), and mϕ(f)=ϕ(f)−ϕ(0). ϕ is minimized at some f. Since our ϕ corresponds to something that's not an a-measure, (mϕ)−(1)<0  Let our affine continuous functional ψ be defined as ψ(f)=(mϕ)+(f)+ϕ(0)+(mϕ)−(1).  In order to show that ψ corresponds to an a-measure below (mϕ,bϕ) in H′, we need three things. One is that ψ is monotone (is an a-measure), two is that it lies below ϕ over C(X,[0,1]) and three is that it lies above h. Take note of the fact that ϕ(0)+(mϕ)−(1)≥0, because ϕ(0)=bϕ. For monotonicity of ψ, it's pretty easy. If f′≥f, then ψ(f′)=ψ(f+(f′−f))=(mϕ)+(f+(f′−f))+ϕ(0)+(mϕ)−(1) ≥(mϕ)+(f)+ϕ(0)+(mϕ)−(1)=ψ(f) and we're done with that part. For being less than or equal to ϕ over C(X,[0,1]) (we know it's not the same as ϕ because ϕ isn't monotone and ψ is), ψ(f)=(mϕ)+(f)+ϕ(0)+(mϕ)−(1)≤(mϕ)+(f)+ϕ(0)+(mϕ)−(f) =mϕ(f)+ϕ(0)=ϕ(f)−ϕ(0)+ϕ(0)=ϕ(f) For being ≥h over C(X,[0,1]) it takes a somewhat more sophisticated argument. By Lemma 5, regardless of ϵ, there exists a f′ where mϕ(f′)<(mϕ)−(1)+ϵ. 
Then, we can go: ψ(f)+ϵ>ψ(f)+mϕ(f′)−(mϕ)−(1) =(mϕ)+(f)+ϕ(0)+(mϕ)−(1)+mϕ(f′)−(mϕ)−(1)=(mϕ)+(f)+ϕ(0)+mϕ(f′) =(mϕ)+(f+f′)+ϕ(0)+(mϕ)−(f′)≥(mϕ)+(sup(f,f′))+ϕ(0)+(mϕ)−(sup(f,f′)) =mϕ(sup(f,f′))+ϕ(0)=ϕ(sup(f,f′))≥h(sup(f,f′))≥h(f) The last steps were done via the definition of ϕ, ϕ≥h, and h being monotonic. So, ψ(f)+ϵ>h(f) for all ϵ and all f∈C(X,[0,1]), getting ψ(f)≥h(f) for f∈C(X) (because h is −∞ everywhere else) Thus, ψ specifies an a-measure (ψ being monotone) that is below the sa-measure encoded by ϕ (by ϕ≥ψ over C(X,[0,1])), yet ψ≥h, so said point is in H′. This witnesses that there can be no minimal points in H′ that aren't a-measures. That just leaves getting the slope bound from Lipschitzness, the worst part of this whole proof. Phase 6: Let λ⊙ be the Lipschitz constant for h. Fix a ϕ that corresponds to a minimal point with λ>λ⊙. This violates the Lipschitz bound when traveling from 0 to 1, so the Lipschitz bound is violated in some direction. Further, the graph of ϕ touches the graph of h at some point f∗∈C(X,[0,1]), because if it didn't, you could shift ϕ down further until it *did* touch, witnessing that the point ϕ came from wasn't minimal (you could sap more from the b term). Now, if this point is minimal, it should be *impossible* to craft a ψ which is ≤ϕ over C(X,[0,1]), ≥h, and different from ϕ. We shall craft such a ψ, witnessing that said point isn't actually minimal. Further, said ψ won't violate the Lipschitz bound in any direction. Thus, all affine functionals corresponding to minimal points must obey the Lipschitz bound and be monotone, so they're a-measures with λ≤λ⊙. In order to do this, we shall craft three sets in C(X)⊕R. A, B1, and B2. Set A is {(f,b)|f∈C(X,[0,1]),b≤h(f)}. Pretty much, this set is the hypograph of h. It's obviously convex because h is concave, and the hypograph of a concave function is convex. It's closed because h is continuous. Set B1 is {(f,b)|f∈C(X,(0,1)),b>ϕ(f)}. This could be thought of as the interior of the epigraph of ϕ restricted to C(X,[0,1]). Undershooting this means you never exceed ϕ over C(X,[0,1]). First, it's open. This is because, due to f being continuous over a compact set X, the maximum and minimum are attained, so any f∈C(X,(0,1)) is bounded below 1 and above 0, so we've got a little bit of room to freely wiggle f in any direction. Further, since ϕ−ϕ(0) is a continuous linear functional on C(X) which is a Banach space, it's a bounded linear functional and has *some* Lipschitz constant (though it may exceed λ⊙), so we have a little bit of room to freely wiggle b as well. So B1 is open. Also, B1 is convex, because a mixture of f and f′ that are bounded away from 0 and 1 is also bounded away from 0 and 1, and pb+(1−p)b′>pϕ(f)+(1−p)ϕ(f′)=ϕ(pf+(1−p)f′). Set B2 is {(f,b)|b>λ⊙d(f,f∗)+ϕ(f∗)}. This could be thought of as an open cone with a point (it's missing that exact point, though) at (f∗,ϕ(f∗)), that opens straight up, and certifies a failure of the λ⊙ bound on slope. If an affine function includes the point (f∗,ϕ(f∗)) in its graph, then if it increases faster than λ⊙ in any direction, it'll land in this set. It's open because, given a point in it, we can freely wiggle the f and b values around a little bit in any direction, and stay in the set. Now we'll show it's convex. Given an (f,b) and (f′,b′) in it, due to C(X) being a Banach space (so it has a norm), we want to check whether pb+(1−p)b′>λ⊙d(pf+(1−p)f′,f∗)+ϕ(f∗). 
Observe that (using the defining axioms for a norm) pb+(1−p)b′>p(λ⊙d(f,f∗)+ϕ(f∗))+(1−p)(λ⊙d(f′,f∗)+ϕ(f∗)) =λ⊙(pd(f,f∗)+(1−p)d(f′,f∗))+ϕ(f∗)=λ⊙(p||f−f∗||+(1−p)||f′−f∗||)+ϕ(f∗) =λ⊙(||pf−pf∗||+||(1−p)f′−(1−p)f∗||)+ϕ(f∗) ≥λ⊙(||pf−pf∗+(1−p)f′−(1−p)f∗||)+ϕ(f∗) =λ⊙(||pf+(1−p)f′−f∗||)+ϕ(f∗)=λ⊙d(pf+(1−p)f′,f∗)+ϕ(f∗) So, B2 is convex. Ok, so we've got a convex closed set and two convex opens. Now, consider B:=c.h(B1∪B2). The convex hull of an open set is open. We will show that A∩B=∅. Assume this is false, and that they overlap. The point where they overlap can then be written as a convex mixture of points from B1∪B2. However, B1 and B2 are both convex, so we can reduce it to a case where we're mixing one point (f,b) from B1 and one point (f′,b′) in B2. And (pf+(1−p)f′,pb+(1−p)b′)∈A. If p=0, then we've just got a single point in B2. Also, ϕ(f∗)=h(f∗). b′>λ⊙d(f′,f∗)+ϕ(f∗)=λ⊙d(f′,f∗)+h(f∗)≥h(f′) This is because ϕ(f∗)=h(f∗) and h has a Lipschitz constant of λ⊙, so it can't increase as fast as we're demanding as we move from f∗ to f′, which stays in C(X,[0,1]). So (f′,b′)∉A. If p=1, then we've just got a single point in B1. Then b>ϕ(f)≥h(f), so again, (f,b)∉A. For the case where p isn't 0 or 1, we need a much more sophisticated argument. Remembering that (f,b)∈B1, and (f′,b′)∈B2, we will show that (pf+(1−p)f∗,pb+(1−p)ϕ(f∗)) lies *strictly* above the graph of h. Both f and f∗ lie in C(X,[0,1]), so their mix lies in the same set, so we don't have to worry about h being undefined there. Also, remember that ϕ≥h over C(X,[0,1]). Now, pb+(1−p)ϕ(f∗)>pϕ(f)+(1−p)ϕ(f∗)=ϕ(pf+(1−p)f∗)≥h(pf+(1−p)f∗) The critical > is by the definition of B1, and (f,b)∈B1. So, the b term is strictly too high for this point (different than the one we care about) to land on the graph of h. With the aid of this, we will consider "what slope do we have as we travel from (pf+(1−p)f∗,pb+(1−p)ϕ(f∗)) to (pf+(1−p)f′,pb+(1−p)b′)"? Said slope is [(pb+(1−p)b′)−(pb+(1−p)ϕ(f∗))]/d(pf+(1−p)f′,pf+(1−p)f∗)=(1−p)(b′−ϕ(f∗))/||(pf+(1−p)f′)−(pf+(1−p)f∗)|| =(1−p)(b′−ϕ(f∗))/((1−p)||f′−f∗||)=(b′−ϕ(f∗))/d(f′,f∗)>(λ⊙d(f′,f∗)+ϕ(f∗)−ϕ(f∗))/d(f′,f∗)=λ⊙ That critical > is by (f′,b′)∈B2 and the definition of B2.  So, if we start at (pf+(1−p)f∗,pb+(1−p)ϕ(f∗)) (and pf+(1−p)f∗ lies in C(X,[0,1])), we're above the graph of h. Then, we travel to (pf+(1−p)f′,pb+(1−p)b′), where pf+(1−p)f′∈C(X,[0,1]) by assumption that this point is in A, but while doing this, we ascend *faster* than λ⊙, the Lipschitz constant for h. So, our point of interest (pf+(1−p)f′,pb+(1−p)b′) lies above the graph of h and can't lie in A, and we have a contradiction. Putting all this together, A∩B=∅. Since B is open, and they're both convex and nonempty, we can invoke [Hahn-Banach](https://en.wikipedia.org/wiki/Hahn%E2%80%93Banach_theorem) (first version of the theorem in the "Separation of Sets" section) and conclude they're separated by some continuous linear functional ψL. Said linear functional must increase as b does, because (0,0)∈A, and (0,b) (for some sufficiently large b) lies in B2, thus in B. This means that given any f and a∈R to specify a level, we can find a unique b where ψL(f,b)=a. So, any level set of this continuous linear functional we crafted can also be interpreted as an affine functional. There's a critical value of the level set that achieves the separation, ψL(f∗,ϕ(f∗)). This is because (f∗,ϕ(f∗))=(f∗,h(f∗))∈A, but (f∗,ϕ(f∗)+ϵ) is in B2, thus in B, for all ϵ. So we've uniquely pinned down which affine function ψ we're going for. 
Since the graph of ψ is a hyperplane separating A and B (It may touch the set A, just not cut into it, but it doesn't touch B), from looking at the definitions of A and B1 and B2, we can conclude: From the definition of A, ψ(f)≥h(f), so ψ≥h over C(X,[0,1]). From the definition of B1, ψ(f)≤ϕ(f) over C(X,(0,1)), and they're both continuous, so we can extend ψ(f)≤ϕ(f) to C(X,[0,1]) by continuity, so ψ≤ϕ over C(X,[0,1]). Also, h(f∗)≤ψ(f∗)≤ϕ(f∗)=h(f∗), so ψ(f∗)=ϕ(f∗), and this, paired with the ability of B2 to detect whether an affine function exceeds the λ⊙ slope bound (as long as the graph of said function goes through (f∗,ϕ(f∗))), means that the graph of ψ not entering B2 certifies that its Lipschitz constant is λ⊙ or less. Since ϕ *does* enter B2 due to violating the Lipschitz constant bound, this also certifies that ϕ≠ψ. Putting it all together, given a ϕ which corresponds to a minimal point and violates the Lipschitz bound, we can find a ψ below it that's also above h, so said minimal point isn't actually minimal. Therefore, if you were to translate a minimal point in the induced H into an affine function above h, it'd have to A: not violate the Lipschitz bound (otherwise we could undershoot it) and B: be monotone (otherwise we could undershoot it). Being monotone certifies that it's an a-measure, and having a Lipschitz constant of λ⊙ or less certifies that the λ of the a-measure is λ⊙ or less. We're finally done! The next proofs [are here.](https://www.alignmentforum.org/posts/b9jubzqz866CModHB/proofs-section-1-2-mixtures-updates-pushforwards)
45a1ec74-9328-4d99-8be8-44c8c8fc68e1
trentmkelly/LessWrong-43k
LessWrong
[Question] Tracking accuracy of personal forecasts

I've been thinking about how I can improve my accuracy predicting events of personal interest (e.g., "Will my landlord get the washing machine fixed within the next two weeks", or "Will my parent die this year" for a more extreme example). Betting markets will not help me with that.

At first I thought about creating dedicated software that gathers such predictions, the final outcomes of predicted events, and presents their accuracy so that the user can spot bias. Then I realised a simple spreadsheet might suffice to gather data at first and assess how useful this is. And if the need arises in the future, it should be easy to import into dedicated software, provided that all the relevant data is already there.

Does anyone track their personal predictions? If so, what methodology do you use, and did it allow you to improve your accuracy?

As an RFC, here's the spreadsheet layout I have in mind:

* Tags (value 0 or 1):
  * Health
  * Finance
  * Interpersonal relations
  * ...
* Date of the forecast
* Event (e.g., "My landlord will get the washing machine fixed within the next two weeks"). I'm planning to formulate them so that "yes" is always the desired outcome, so that it's easy to spot if I'm reliably too optimistic or pessimistic.
* Estimated probability
* Deadline of the forecast
* Outcome (value 0 or 1, filled after the deadline of the forecast, or when the answer is known sooner)
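(A small addition for concreteness, not part of the original question: once the spreadsheet has the Estimated probability and Outcome columns, scoring it is a few lines of Python. The sketch below assumes a hypothetical CSV export with columns named "probability" and "outcome".)

```python
import csv
from collections import defaultdict

# Assumes a CSV export of the spreadsheet with (hypothetical) columns
# "probability" (the estimated probability, in [0,1]) and "outcome" (0 or 1).
with open("forecasts.csv") as fh:
    rows = [r for r in csv.DictReader(fh) if r["outcome"] in ("0", "1")]
preds = [(float(r["probability"]), int(r["outcome"])) for r in rows]

# Brier score: mean squared error of the stated probabilities; lower is better.
brier = sum((p - o) ** 2 for p, o in preds) / len(preds)
print(f"Brier score: {brier:.3f} over {len(preds)} resolved forecasts")

# Calibration by 10%-wide bucket: a consistent gap between the stated
# probability and the observed frequency of "yes" is exactly the bias to spot.
buckets = defaultdict(list)
for p, o in preds:
    buckets[min(int(p * 10), 9)].append(o)
for k in sorted(buckets):
    freq = 100 * sum(buckets[k]) / len(buckets[k])
    print(f"{10 * k:3d}-{10 * k + 9}%: observed {freq:.0f}% yes (n={len(buckets[k])})")
```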
16543a0b-3a94-4bc5-8f53-e3d2f4f122c3
trentmkelly/LessWrong-43k
LessWrong
AGI Timelines in Governance: Different Strategies for Different Timeframes

Summarization Table

Expectations:
* Pre-2030: AGI will be built by an organization that's already trying to build it (85%); Compute will still be centralized at the time AGI is developed (60%); National government policy won't have strong positive effects (70%)
* Post-2030: Some governments will be in the race (80%); More companies will be in the race (90%); China is more likely to lead than pre-2030 (85%); The best strategies will have more variance (75%); There will be more compute suppliers[1] (90%)

Comparatively More Promising Strategies (under timelines X)[2]:
* Pre-2030: Aim to promote a security mindset in the companies currently developing AI (85%); Focus on corporate governance (75%); Target outreach to highly motivated young people and senior researchers (80%); Coordinate with national governments (65%)
* Post-2030: Focus on general community building (90%); Build the AI safety community in China (80%); Avoid publicizing AGI risk (60%); Beware of large-scale coordination efforts (80%)

Probability estimates in the "Promising Strategies" category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.

Introduction

Miles Brundage recently argued that AGI timeline discourse might be overrated. He makes a lot of good points, but I disagree with one thing. Miles says: "I think the correct actions are mostly insensitive to timeline variations." Unlike Miles, I think that if the timeline differences are greater than a couple of years, the choice of actions does depend on timeline differences[3]. In particular, our approach to governance should be very different depending on whether we think that AGI will be developed in ~5-10 years or after that. In this post, I list some of the likely differences between a world in which AGI is developed before ~2030 and one in which it is developed after, and discuss how those differences should affect how we approach AGI governance. I discuss most of the strategies and considerations in relative terms, i.e. argui
a1ac8c48-625a-4f3e-b68d-19439b378a9e
trentmkelly/LessWrong-43k
LessWrong
Breaking the SIA with an exponentially-Sleeping Beauty

In this post I'd like to construct an extension to the Sleeping Beauty problem which is unsolvable from a thirder/SIA perspective. The core of the construction is having the number of wakings Beauty will experience follow a distribution with an infinite mean. This leads to Beauty being unable to assign normalized probability values when reasoning under SIA. Given this forum's fascination with anthropic reasoning and this problem, this may be of interest.

The original Sleeping Beauty problem

The following is a summary of the thirder/SIA approach to the original Sleeping Beauty problem. From what I can tell, this forum is very well versed here, so I won't go into too much background detail. In the original problem, we have the following two steps for the experimenter:

1. Flip a coin. If heads, record the value N=0. If tails, record the value N=1.
2. Put Beauty to sleep [on Sunday]. She will be woken once on each of the following N+1 days, with memory erased between wakings. (i.e. once if heads, twice if tails)

Let Mk be the proposition "I am awake and today is the kth day since the experiment began". (I.e. M1 is "I am awake and it is Monday"; M2 is "I am awake and it is Tuesday".) Also let M be the proposition "I am awake", in the sense that M=M1∨M2. The problem is then to give Beauty a value to assign to the conditional probability of heads given that she has been awoken, i.e. to P(N=0∣M). One version of the thirder/SIA calculation for this value then proceeds as follows.

$$\frac{P(N=0\mid M)}{P(N=1\mid M)}=\frac{P(M\mid N=0)}{P(M\mid N=1)}\cdot\frac{P(N=0)}{P(N=1)}=\frac{P(M_1\mid N=0)+P(M_2\mid N=0)}{P(M_1\mid N=1)+P(M_2\mid N=1)}\cdot\frac{P(N=0)}{P(N=1)}$$
$$=\frac{P(M_1\mid N=0)+0}{2\cdot P(M_1\mid N=1)}\cdot\frac{P(N=0)}{P(N=1)}=\frac{1}{2}\cdot\frac{P(N=0\mid M_1)}{P(N=1\mid M_1)}=\frac{1}{2}\cdot\frac{1/2}{1/2}=\frac{1}{2}.$$

Where the steps are respectively due to:

1. Bayes' law.
2. That M=M1∨M2 and that M1, M2 are disjoint propositions.
3. For the numerator, that M2 is impossible given N=0 (i.e. we cannot wake up on Tuesday if the coin came up heads). For the denominator, this is the part of the SIA assumption: given that the coin came up
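(A quick illustration I'm adding, not part of the original post: the thirder/SIA counting above is easy to check by simulation, treating each awakening as a sample point.)

```python
import random

# Each awakening is one sample point, as SIA prescribes. P(N=0 | M) should
# come out near 1/3 (thirder answer) and P(N=0 | M1) near 1/2.
random.seed(0)
awakenings = []                        # one (N, day) entry per awakening
for _ in range(100_000):
    n = random.randint(0, 1)           # fair coin: N=0 heads, N=1 tails
    for day in range(1, n + 2):        # woken on days 1, ..., N+1
        awakenings.append((n, day))

p_heads_awake = sum(n == 0 for n, _ in awakenings) / len(awakenings)
mondays = [n for n, day in awakenings if day == 1]
p_heads_monday = sum(n == 0 for n in mondays) / len(mondays)

print(f"P(N=0 | M)  ~ {p_heads_awake:.3f}  (thirder: 1/3)")
print(f"P(N=0 | M1) ~ {p_heads_monday:.3f}  (SIA: 1/2)")
```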
bbf0bc1e-5ec3-4116-9415-a33e4cd61bb3
trentmkelly/LessWrong-43k
LessWrong
What do we know about how much protection COVID vaccines provide against long COVID? None
b5585da3-a0fb-497e-8022-e47eff12f87c
trentmkelly/LessWrong-43k
LessWrong
Duels & D.Sci March 2022: Evaluation and Ruleset

This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself. There is a web interactive here where you can test your submission against your rival's deck (or against a couple other NPCs I added for your amusement, if you think yourself mighty enough to challenge those who wield the legendary power of the Sumerian Souvenirs).

NOTE: Win rates in the interactive are Monte-Carlo-d with small sample sizes and should be taken as less accurate than the ones in the leaderboard.

RULESET

Code is available here for those who are interested. A game is played as follows:

* There are up to 6 turns in the game.
* Each turn, each player simultaneously draws 2 cards from their deck. (So over the full duration of the game, each player will draw their entire deck.)
* Then, each player will simultaneously play either 0 or 1 of those cards:
  * Every card has a mana cost, ranging from 1 to 6.
  * On turn X, you can play a card that has a cost of X or less.
  * If both your cards are valid plays, you will play the higher-cost one.
  * For example, if you draw a 2-cost and a 4-cost card:
    * If it is Turn 1, you will play nothing.
    * If it is Turn 2 or 3, you will play the 2-cost card.
    * If it is Turn 4-6, you will play the 4-cost card.
  * Any cards you didn't play are discarded - you will not be able to play them on later turns.
  * Note that:
    * You cannot play two cards in the same turn, even if you would in theory have enough mana to do so.
    * You cannot be clever about which card you play. In some cases a lower-cost card might be better than a higher-cost card: you will play the higher-cost one anyway if you can. If you draw two cards of the same cost, you will play one at random.
* After both players have played a card if they can, each player calculates their Total Power by adding up the Power of all cards they have on their board.
* If one player's Total Power ex
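(An illustrative sketch I'm adding, not the scenario's actual code: the play rule above is mechanical enough to write down directly. The (cost, power) tuple representation is my own assumption.)

```python
import random

# Cards are (cost, power) tuples -- an assumed representation. From the two
# drawn cards, play the highest-cost card with cost <= turn number, breaking
# cost ties at random; play nothing if neither card is playable.

def choose_play(drawn, turn):
    playable = [card for card in drawn if card[0] <= turn]
    if not playable:
        return None
    max_cost = max(card[0] for card in playable)
    return random.choice([card for card in playable if card[0] == max_cost])

# The ruleset's example: a 2-cost and a 4-cost card in hand.
hand = [(2, 3), (4, 6)]
for turn in range(1, 7):
    print(turn, choose_play(hand, turn))
# Turn 1: None; turns 2-3: the 2-cost card; turns 4-6: the 4-cost card.
```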
ca68c37d-cf8f-4e72-868b-4571d48be20e
trentmkelly/LessWrong-43k
LessWrong
Why do we like stories?

Some people like music. Others like visual art. Yet regardless of differences in age, gender and cultural background, everyone likes hearing a story. Whether it’s the fantastical ‘Harry Potter’, the romantic ‘Pride and Prejudice’, the tragic 'Hamlet', or a beloved classic like 'Journey to the West', men and women around the world are enchanted by their spell. Story is one of the most universal art forms. Our cultures are built on myths, and religions are passed on through legends. A curious person would ask: Why do we like stories?

What Is a Story?

Before we try to find the origin of stories, we might first want to know what a story is. A cogent definition of story is given by Randy Olson, a scientist turned filmmaker who now teaches other scientists how to tell stories. He created a simple method for telling stories, called ABT—which stands for “And, But, Therefore”. These three words capture the basic structure of a story. For example, we can tell the story of "The Wizard of Oz": "A little girl living on a farm in Kansas AND her life is boring, BUT one day a tornado sweeps her away to the land of Oz, THEREFORE she must undertake a journey to find her way home." [1]

This idea is not entirely new: Aristotle, the philosopher and one of the first story analyzers, recognized that every story contains a three-act structure: Beginning, Middle, and End. The structure roughly corresponds to Olson’s "And", "But" and "Therefore". However, the advantages of Olson's idea are that the words are simple (they are among the most used words in English) and that each word has a meaning signifying its function in the story. "AND" connects relevant information to introduce the story; "BUT" brings in conflict; "Therefore" resolves the conflict and concludes the story.

Science and Story

While teaching storytelling to scientists over the years, Olson recognized that there is a similarity between ABT and a scientific paper. Most scientific papers have a structure of: Introduction, Method, Results
7a8eb78a-3556-4d69-8294-e1c282da7a23
trentmkelly/LessWrong-43k
LessWrong
Curing Sleep: My Experiences Doing Cowboy Science

Epistemic status: Subjective report describing youthful exuberance. The actual experimentation part of this was very badly executed.

In light of the recent homemade peptide vaccine, this is a writeup of my experiences in insufflating internet chemicals. This happened during a period of my life when I was very bored, kinda rich, and getting really into darknet libertarian culture. This is cowboy science, and upon reflection, I was insufficiently concerned about either legal troubles or brain damage.

Background

Orexin (aka hypocretin-α) is a neuropeptide which promotes wakefulness, weight loss, and happiness. Damage to the orexinergic system seems to cause Type 1 Narcolepsy—comorbid with sleepiness, obesity, and depression. Deadwyler et al. attempted to prove a causal relationship by sleep-depriving rhesus monkeys and having them insufflate orexin. It appears that sleep-deprived monkeys snorting orexin perform as well on cognitive tests as the non-sleep-deprived monkeys.

So there I was, a graduate student reading through this paper, and you can hear the gears in my head turning.

1. Orexin insufflation appears to promote wakefulness in sleep-deprived primates.
2. I am a sleep-deprived primate.
3. ...

Transhuman Greed

Will this cause some kind of...anti-narcolepsy? Have I found a cure for sleep itself?? I was properly embracing More Dakka. But this Kerbal Space Program-tier science thought wasn't as nuts as it seemed. This was 2012, the orexin research was fresh, and my neuroscience professors were still touting the party line of “perhaps the only purpose of sleep is to keep humans indoors and conserving calories during the night, because their comparative advantage is day. We don't even know why, or even if, we really need sleep!"

(We didn't know about sleep-as-memory-post-processing, or that, during sleep, the brain lobes spread apart to let the CSF pressure-wash away all of the metabolite byproducts. We just thought sleep was one
9d919115-31da-4bb8-a99a-3bbc06f12f1b
trentmkelly/LessWrong-43k
LessWrong
Why was the AI Alignment community so unprepared for this moment?

> Our epistemic rationality has probably gotten way ahead of our instrumental rationality
>
> -Scott Alexander
>
> A Lesswrong Crypto autopsy

This is a question post: Why was the AI Alignment community so unprepared for engaging with the wider world when the moment finally came?

EDIT based on comment feedback: This is a genuine question about why something that seems so obvious in hindsight was not clear back then, and an attempt to understand why not. It is not an attempt to cast blame on any person or group.

I have been a LW reader for at least 10 years, but I confess that until the last ~1.5 years I mostly watched the AI alignment conversation float by. I knew of the work, but I did not engage with the work. Top people were on it, and I had nothing valuable to add. All that to say: maybe this has been covered before and I have missed it in the archives.

Lately (throughout this year), there has been a flurry of posts essentially asking: How do we get better at communicating to and convincing the rest of the world about the dangers of AI alignment?

* Catching the Eye of Sauron
* An AI Realist Manifesto
* The Social Alignment Problem

All three were posted in April 2023. The subtext being: if it is possible to not-kill-everyone, this is how we are going to have to do it. Why are we failing so badly at doing this?

At the risk of looking dumb or ignorant, I feel compelled to ask: Why did this work not start 10 or 15 years ago? To be clear: I do not mean true nuts-and-bolts ML researcher Alignment work, of which this community and MIRI were clearly the beginning and end for nearly two decades. I do not even mean outreach work to adjacent experts who might conceivably help the cause. Again, here I think great effort was clearly made. I also do not mean that we should have been actively doing these things before it was culturally relevant. I am asking: Why did the Alignment community not prepare tools and plans years in advance for convincing the wider
9ed56ea8-bc88-4b72-9bd5-d5dbccaf005a
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AXRP Episode 8 - Assistance Games with Dylan Hadfield-Menell [Google Podcasts link](https://podcasts.google.com/feed/aHR0cHM6Ly9heHJwb2RjYXN0LmxpYnN5bi5jb20vcnNz/episode/NTU4OTk4MTctNGY4NS00OWRkLThkYjMtMzdlMmVmNGJmZjZi) This podcast is called AXRP, pronounced axe-urp and short for the AI X-risk Research Podcast. Here, I ([Daniel Filan](https://danielfilan.com/)) have conversations with researchers about their papers. We discuss the paper and hopefully get a sense of why it’s been written and how it might reduce the risk of artificial intelligence causing an [existential catastrophe](https://en.wikipedia.org/wiki/Global_catastrophic_risk): that is, permanently and drastically curtailing humanity’s future potential. How should we think about the technical problem of building smarter-than-human AI that does what we want? When and how should AI systems defer to us? Should they have their own goals, and how should those goals be managed? In this episode, Dylan Hadfield-Menell talks about his work on assistance games that formalizes these questions. The first couple years of my PhD program included many long conversations with Dylan that helped shape how I view AI x-risk research, so it was great to have another one in the form of a recorded interview. **Daniel Filan:** Hello everyone. Today I’ll be talking to Dylan Hadfield-Menell. Dylan’s a graduating PhD student at UC Berkeley, advised by Anca Dragan, Pieter Abbeel and Stuart Russell. His research focuses on the value alignment problem in artificial intelligence: that is, the problem of designing algorithms that learn about and pursue the intended goal of their users, designers, and society in general. He will join the faculty of artificial intelligence and decision-making at MIT as an assistant professor this summer. Today, we’re going to be talking about his work on assistance games, and in particular, the papers [“Cooperative Inverse Reinforcement Learning”](https://arxiv.org/abs/1606.03137), co-authored with Anca Dragan, Pieter Abbeel and Stuart Russell, [“The Off-Switch Game”](https://arxiv.org/abs/1611.08219), also co-authored with Anca Dragan, Pieter Abbeel and Stuart Russell, and [“Inverse Reward Design”](https://arxiv.org/abs/1711.02827), co-authored with Smitha Milli, Pieter Abbeel, Stuart Russell and Anca Dragan. For links to these papers and other useful info, you can check the description of this podcast and you can read a transcript at [axrp.net](https://axrp.net/). Dylan, welcome to the show. **Dylan Hadfield-Menell:** Thanks so much for having me Daniel. It’s a pleasure to be here. **Daniel Filan:** I made something of an assumption in the introduction that you think about those papers under the umbrella of assistance games, is that right? **Dylan Hadfield-Menell:** Yeah, I think so. I take a pretty broad view of what assistance games can mean, and I tend to think of cooperative IRL as a formalization of a class of assistance games of which Off-Switch Game and Inverse Reward Design are examples. **Daniel Filan:** Okay. So for listeners who aren’t familiar, what is an assistance game? **Dylan Hadfield-Menell:** An assistance game is a formalization of a sequential decision making problem where we have two players within the game. There’s the human player and the AI player. Both of them have a shared goal in the sense that the AI system is meant to act on the person’s behalf and, in this case, optimize utility for the person. The big difference between the two is what information they have. 
In an assistance game we assume that the person has knowledge about their goal and the system has to learn about this via interactions with the person. **Daniel Filan:** What kinds of benefits do you think we get from thinking about assistance games and thinking of AI problems in terms of assistance games? **Dylan Hadfield-Menell:** I think the big thing that assistance games really provide is a way of thinking about artificial intelligence that doesn’t presuppose an obvious and measurable goal. One of the things that my work on assistance games has pointed me towards is that in AI systems, a lot of what we’re trying to do is really provide access to a broad set of intelligent computer behaviors, really. And in many ways, a lot of the improvements in artificial intelligence have really been improvements at the level of improving the ability of a specialized and highly trained group of AI practitioners, graduate students and very early technologists, to do a better job of identifying and building in complex qualitative behaviors for systems. And in the process of doing that, these practitioners developed a bunch of mathematics of optimal decision making and how to implement that with algorithms. However, sitting outside of that research, there’s always a researcher, in fact a research community, looking for how generalizable is this approach, does this capture an intelligent behavior, whatever we mean by that? **Dylan Hadfield-Menell:** What does the spec for the system look like? How should we evaluate it? And all of this is actually crucial to the eventual successes we’ve had in AI with, say, computer vision and more recent things in natural language. And what assistance games really do is they study that phenomenon at the level of abstraction where we’re trying to express goals, and they allow us to identify what’s the difficulty in expressing goals? What are the ways that things can go wrong? And what are the limits on our ability to specify arbitrarily good behaviors, where good can be interpreted quite loosely here. **Daniel Filan:** That’s interesting. This is [the AI x-risk research podcast](https://axrp.net/), and you’re, I guess, at least associated with a bunch of researchers who are interested in reducing x-risks from AI. I’m wondering, do you think that assistance games have any particular benefits when it comes to reducing existential threats from artificial intelligence? **Dylan Hadfield-Menell:** Yes, definitely. So the benefits for existential risk, I believe, come in through the extension of benefits that they have for near term concerns with AI systems. We are getting very good at increasing AI capabilities, and for a long time, this has been the bottleneck in deploying effective AI systems. And it’s been reasonable to have an individual researcher using ad hoc mechanisms to align that behavior with their qualitative goals. When I worked in motion planning earlier on in my PhD, it was my job to make sure that the tests that we designed and the experiments that we built and the simulations that I ran, and eventually the work I did with real robots was generalizable and was what I wanted, if that makes sense. And it wasn’t necessary to formally study it, because we were looking at AI systems largely under lab conditions. **Dylan Hadfield-Menell:** When you get out into the real world, the difficulty of getting the behavior exactly right, and getting the details right, often in the first go, are important. 
What assistance games do is they allow us to study, how do we make that choice correctly? It’s sort of like we’ve drawn a distinction between AI technology, which is an AI system, and AI research, which is sort of the thing that happens outside of the system that sets it up. **Dylan Hadfield-Menell:** And I think really this interaction whereby at this point, the average grad student can specify a really wide range of interesting visual or language or motion related tasks and have them be executed on a system. That’s really the place that we’ve gotten to for AI currently. But the thing is, if you’re doing that in a lab, it’s okay to have the experiment go wrong, to break some hardware. You get a lot of trial and error. It’s a very forgiving, quite literally experimental environment. And as we start to put systems into the real world, we need to get better at building that infrastructure around systems more effectively. And it’s in studying that problem that, I think, assistance games can overall reduce existential risk from AI systems. **Daniel Filan:** Okay. So am I right to think that the basic story is something like, before I think of assistance games, I’m going to think of some objective that I want a really smart AI to pursue and then plug it in and press go, and then it just does something absolutely terrible or enslaves humanity, or something to achieve its objective, but after assistance games, it’s going to provide this method of analysis where I’m going to notice the problems of specifying what I want and making sure that an AI doesn’t pursue things literally and checks in with me. Is that roughly what I should think of the benefits as? **Dylan Hadfield-Menell:** Yeah. One way to think about it is, supervised learning is actually a goal specification language. It’s an effective goal specification language for, say, labeling images. And in fact that is how it gets tied down into real systems. We optimize a utility function that that defines. There are lots of properties of this goal specification language that are nice. Programming in it is something that we can distribute across a wide variety of relatively unskilled individuals, relatively cheaply, and we can actually optimize it in order to produce behaviors that empirically are good. **Dylan Hadfield-Menell:** What assistance games do is they study properties of, in a sense, this interface between humans that have a qualitative internal goal and systems that are going to be optimizing that goal. And so we can study these languages both from the standpoint of empirical capabilities, but also from the standpoint of risk and safety assessment, of robustness. I guess what I’ll say is that assistance games identify both the type of a solution we want when we’re building AI systems, and they allow us to characterize different classes of solutions, in particular, from a standpoint of brittleness and forgivingness to mistakes on the person’s behalf. **Daniel Filan:** Okay. And I guess on the flip side, if I’m a listener really interested in reducing existential risk, what should I not expect assistance games to help me out with? How should I limit my ambitions for what assistance games are going to be really good at? **Dylan Hadfield-Menell:** I think assistance games are first and foremost, an analytical tool. **Daniel Filan:** Okay. **Dylan Hadfield-Menell:** So primarily what they do is allow us to formalize certain value alignment scenarios and assumptions, and study different properties of that. **Daniel Filan:** Okay. 
**Dylan Hadfield-Menell:** That can be done, I think - for near-term systems, it’s easy to take these formalisms and build them actually into direct AI systems, but it’s not my claim that implementing those systems directly would reduce x-risk. So taking current AI technology and merely saying, “Ah, yes, I am designing my system as the solution to an assistance game of some kind.” Will not meaningfully reduce x-risk on its own. **Daniel Filan:** Why not? **Dylan Hadfield-Menell:** Because I think you will still end up - tracking the types of uncertainty that I think you would want to do for truly advanced systems at the level where they would pose a threat, I think, is beyond the capability of current inference systems. **Daniel Filan:** Okay. **Dylan Hadfield-Menell:** So under the presumption that if you try to build an assistance game style solution into a system right now, I’m presuming that that will consist of some form of Bayesian inference, tracking uncertainty about objectives, and maintaining those estimates in some way, combined with some form of policy optimization. And in fact, I think the types of Bayesian inference that we can do right now are not up to par with systems that would be effective enough to pose a threat. **Daniel Filan:** Okay. **Dylan Hadfield-Menell:** Does that make sense? **Daniel Filan:** Yeah, I think that makes sense. **Dylan Hadfield-Menell:** Now that’s not to say … I think those types of systems will be better for lots of other reasons. I think there are lots of benefits to leveraging uncertainty about objectives with short term systems. And I think that can reduce short-term risks. The primary thing that this can really do for long-term existential risk at the moment is identify areas of research that are valuable. So in this case, this would say we should be investing very heavily in uncertainty estimation, broadly. Specifically uncertainty estimation about utilities and utility learning. **Daniel Filan:** Okay. **Dylan Hadfield-Menell:** And I believe that analyzing these theoretical models can help point out directions for the kinds of solutions we will want to move towards with other technologies. I think this can point out things that are risks and things that are likely to fail, and it can shape rough abstract ideas of what a safe interaction with advanced systems would look like. What are desiderata we can aim for? What are targets to shoot for? But it doesn’t directly give us a recipe for how to implement those targets. **Daniel Filan:** Sure. So in that answer, you alluded to short term benefits that we could get. For … I don’t know if any listeners care at all about what happens in the next hundred years, but if they do, what are some short term, nice things that you imagine implementing solutions to assistance games could provide us? **Dylan Hadfield-Menell:** I think one of the big ones is we’re very limited with AI systems to optimizing for what we can measure. And so this means there’s a large structural bias in the kinds of systems that we build, in that they’re either designed from the ground up to measure very specific things, or they end up optimizing for shallow measures of value, because that’s the best thing that you have. Introducing assistance games into the way that we think about AI has the ability to reduce that. 
So if it becomes easier to start to optimize for more qualitative goals within AI systems, that makes it easier for, say, a system designer at Facebook to make a clear goal of here are the types of content that we want to recommend as a company. It’s very complicated to specify that now, but you could imagine work in assistance games, looking at managing an interaction between a company like Facebook and their system would allow them to use much richer goals than the types of behavioral feedback that they have. And you can see inklings of them trying to move in this direction in addition to sort of other large recommender system platforms. **Dylan Hadfield-Menell:** The other side of what assistance games in short term systems would give us is that we’re building into the analysis of our systems an expectation that there will be ongoing oversight and adaptation. This is something that actually happens in practice. It’s only in academic settings that we collect our training data, hit stop, and then run on a special test set. In reality, you have your model, it’s being trained on some data stream that’s coming in and you’re adapting and adjusting in ongoing fashion to changes in your goals as well as changes in the world. If we design systems as solutions to assistance games, we will actually be building that type of a dynamic into the math at a much deeper level. And I think this can expose interfaces for systems that are just much more controllable and understandable. **Daniel Filan:** All right. So now that I guess listeners are hyped for all the great things about assistance games, let’s get into talking about them a bit more concretely. So the first paper I want to talk about is [“Cooperative Inverse Reinforcement Learning”](https://arxiv.org/abs/1606.03137), CIRL for short, for C-I-R-L. Can you tell us just first of all, what is CIRL, and is it the same thing as an assistance game? **Dylan Hadfield-Menell:** So CIRL is not the same thing as an assistance game. It is a subclass of assistance games and it is intended to be the simplest base case of an assistance game. So to that end, we built it as an extension of a Markov decision process, which is a mathematical model for planning used a lot in AI systems. In a Markov decision process, you have a state and you have actions and a reward function. What happens is you take an action in a state, this leads to a new state, and you also get a reward that corresponds to that. From the human’s perspective, everything is the same in cooperative IRL. **Dylan Hadfield-Menell:** So the human has an observed state. They can take actions, they can accomplish things in the world. What’s new is that you also have a robot player that’s a second actor in this game. So this is an addition into the MDP. You can think about it as a turn-taking scenario, where the person goes first, then the robot goes second. And from the robot’s perspective, there’s partial information. So the rewards for each state are a partially observed component, which only the person gets to see. And our goal here was to extend the Markov decision process in the minimal way to account for the fact that there are two players in the world, the human and the robot. And the robot does not know the person’s objective. **Daniel Filan:** Okay. I guess the name implies that it might be a type of inverse reinforcement learning. Can you say a little bit about what normal inverse reinforcement learning is and why we might want it to be more cooperative? 
**Dylan Hadfield-Menell:** In inverse reinforcement learning what you are doing is solving… well in this case, the opposite of planning really, although there’s a joke that inverse reinforcement learning is a slight misnomer. And in fact, if you go back and look up other competing branches, there’s a lot of related work under [inverse optimal control](https://asmedigitalcollection.asme.org/fluidsengineering/article-abstract/86/1/51/392203/When-Is-a-Linear-Control-System-Optimal). So what this is to say is that inverse reinforcement learning is essentially doing the opposite of reinforcement learning or planning. So if planning is the problem of taking in a Markov decision process, an MDP, and producing a sequence of actions that get high reward, in inverse reinforcement learning, you observe a sequence of actions and you try to infer the reward function that those actions optimize for. As you can see, this is kind of doing the opposite of planning. **Dylan Hadfield-Menell:** In planning, you take in a problem, a reward function, and spit out a trajectory, and in inverse reinforcement learning, you take in a trajectory and spit out a reward function. The relationship this has to cooperative IRL is that in cooperative IRL, you are solving, in fact, the same, or at least a very similar problem, which is to say that you see a sequence of actions from the person and from those actions, you need to infer the reward function that generated them. Now, if there’s this deep similarity, why do you need something new? Why can’t we just use inverse reinforcement learning in order to solve assistance games? The reason is because there are a couple of small changes that inverse reinforcement learning doesn’t quite capture, that can be important to represent. One of them, which is kind of a minor tweak, is that inverse reinforcement learning is typically formalized as learning the reward function from the person’s perspective. **Dylan Hadfield-Menell:** So in the standard setup, if you look at a person go, and maybe watch someone go through their morning routine, so they get up, they wash their face, they go get a coffee and so on. If you infer a reward function with inverse reinforcement learning and then naively optimize it, what you end up with is a robot that washes the robot’s face and then tries to drink coffee for the robot. In a sense it can naively be formulated as imitation learning, which is only actually one type of solution to an assistance game. And we want to allow for that broader class. So what cooperative IRL does is say, “Okay, no, there are actually two agents in the environment. There’s the person and the robot.” If you’re doing cooperative IRL right, what should happen is that you observe the person’s morning routine, and then the next day, while they’re washing their face, you’re making coffee so that it’s ready for them. **Dylan Hadfield-Menell:** In this way, you’re not imitating their objective as you would in standard inverse reinforcement learning. This is kind of a minor tweak. You wouldn’t lose sleep over this issue, but it points towards actually a much deeper issue with inverse reinforcement learning, which is that inverse reinforcement learning assumes that the behavior is generated without the person knowing that the robot is watching them. So it assumes that your immediate rewards, the behaviors you’re taking are done purely in service of the rewards that you get within those behaviors. 
And I think if you think about how you would behave, if you knew that your new robot toy was watching you, I think you can at least agree that you would probably do something different. And in fact, in lots of scenarios, people will adapt their behavior when someone is watching them in order to be more informative, or to in general, try to better accomplish their goals. **Dylan Hadfield-Menell:** And this type of a dynamic is actually crucial for the kinds of problems that we’re worried about with assistance games, or are worried about in an existential risk context with assistance games. The reason why it’s crucial is that as we are building representations of our goals for future increasingly advanced systems to optimize, we are the humans in this assistance game, right? It’s not actually cooperative IRL. It’s got multiple people, non-stationary rewards, partial information up the wazoo, all kinds of things. But ultimately we want to learn how to solve that game well for us. And we are certainly adapting our behavior to what the system will do eventually. And it’s this incentive that I think actually drives a lot of alignment. So inverse reinforcement learning is basically a type of imitation learning. For more advanced systems we don’t expect imitation learning to truly be an effective solution. **Dylan Hadfield-Menell:** And so what this means is we’re going to be relying on a communication solution to an assistance game at some level. And when you study those communication mechanisms and value alignment, or assistance games, they actually only make sense if the person is taking into account the impact of their actions on the future behavior of the AI system. If you look at natural language, for example, as a way to describe your goals, under inverse reinforcement learning, you can’t actually describe your goals with natural language. You can just show the system how you would like it to talk. **Daniel Filan:** So somehow cooperative IRL is like providing an analytical framework for understanding communication between humans and AI systems. And we think that that kind of communication is going to be really important. **Dylan Hadfield-Menell:** And specifically it allows us to formalize the limits of intentional communication of our goals. **Daniel Filan:** Okay. What are the limits of intentional communication of our goals? I didn’t get this from the paper. **Dylan Hadfield-Menell:** Oh, I don’t know that this is necessarily in the paper. I think we’re now talking - this is probably the class of things which are motivation for the paper, that is not the kind of thing that you could get through peer review in 2015. Sorry, could you repeat the question? **Daniel Filan:** We were talking about communication in co-operative inverse reinforcement learning. **Dylan Hadfield-Menell:** Oh, yes. Intentional communication of objectives and … oh, you were talking about what are the limits. **Daniel Filan:** The limits, the limits of intentional communication. Oh yes. **Dylan Hadfield-Menell:** I mean, we are trying to analyze that type of intentional communication. Intentional communication, there we go. Yes. So naturally they come in through cognitive limits, which is probably one of the biggest ones, so … **Daniel Filan:** On the human part or the robot part? **Dylan Hadfield-Menell:** On the human part. 
So, when we study assistance games, a solution, where “solution” is in air quotes, because it’s not really the answer to how do we play an assistance game, but what assistance games really let you do is say if the person plays strategy X and the robot plays strategy Y, how well did they do based off some assumptions about what the environment looks like, and what types of goals are present. And if you’re thinking about building an AI system that can integrate well, and be robust to lots of different people and produce with very, very high likelihood, at least an improvement in utility, one of the things you really have to compensate for is the fact that the human policy is not arbitrary or controllable. It is limited by our biology and our training and upbringings. **Dylan Hadfield-Menell:** This means that there’s only so much information we can provide about what we want our system to do at any given point. And given that in principle, when you specify a system, you are actually specifying a mapping from all possible observations that system could get, to the correct action to take in response. And to do that, you have to at least think about all of those possibilities in some way. And the fact is that we can’t, and we have cognitive limits on how many options we can consider, and even what types of options we can consider in a way. I cannot imagine truly strange unfamiliar situations like that. **Dylan Hadfield-Menell:** I have to perhaps experience them, or think hard about experiencing them, or have analogies that I can draw upon them. And all of these are limits on the amount of information I can provide about my goals. Perhaps another way to explain this, and perhaps this is maybe the central way in which assistance games can help reduce existential risk, is they allow us to identify that there are really two types of information that play into what a system is going to do. At least in the current paradigm for AI systems. There is both what I will call objective information, which is information about how the world will unfold, perhaps in response to different actions you might take. And that’s separate from normative information, which is information about how to value those different possibilities, and what assistance games do primarily, is they’re a decision problem where there is a formal representation of a limited normative channel. And I think the limits on that normative channel need to be accounted for and balanced against certain types of risk. And that risk could be how quickly you change the world, or - in a sense, you need to regularize certain aspects of your behavior by the amount of information that you have about goals. **Daniel Filan:** Okay. So speaking of the information about goals. So I think when people in the AI alignment space are thinking about learning normativity, they’re sometimes thinking of different things. So, one thing they could be thinking about is learning the specification of a kind of specific task. For instance, I want to build a really good CPU or something, where it’s complicated, but ultimately it’s about one CPU as opposed to getting an AI that just learns everything I care about in my life. And it’s just going to make the world perfect for me. Which of these do you think CIRL is aimed at analyzing or could it be equally good at both? **Dylan Hadfield-Menell:** Actually, I think it’s both. **Daniel Filan:** All right. **Dylan Hadfield-Menell:** So these are very different in practice, right? 
The things that I need to specify for a utility function to be good for getting a CPU to do things that I want are actually quite non-trivial and might bring to bear complicated… If we wanted to, I can tell a story where you need to optimize CPUs to account for environmental possibilities because they’re going to be used in Bitcoin mining. I’ll say things can get more complicated than a simple first-pass analysis might suggest. **Daniel Filan:** I do think people don’t use CPUs much in Bitcoin mining anymore. **Dylan Hadfield-Menell:** Okay. Yes, that is a very good point. **Daniel Filan:** Alas, but they are used for, I think for other cryptocurrencies. CPUs are still on the cutting edge of mining. **Dylan Hadfield-Menell:** Yes. I more meant to say that actually one of the issues, one of the sort of structural issues in building AI systems is the presumption of narrowness based off of relatively simple analysis. And I would say that even CPU design can get far more complicated than even graduate-level education would lead you to imagine. But putting that aside, the main difference for both of these is really not what the person’s goals are. It has much more to do with what the robot is capable of doing and what it’s likely to be doing, and what environment it is likely to be in. So, in cooperative IRL, you would formalize these differences not by a change in the person’s reward function within the model. You would formalize these via a change in the environment that the robot is acting in and maybe some small things about the prior structure, right? **Dylan Hadfield-Menell:** So, the “learn about everything” is the simpler model because you have just a really broad set of environments the system could be acting in. And so through interactions with the person, you’re going to focus on trying to reduce uncertainty about objectives, overall. And there are some interesting ideas on this. If people want to look at that kind of idea more, I would go look at [Justin Fu’s work on adversarial IRL](https://scholar.google.com/citations?user=T9To2C0AAAAJ&hl=en) which captures some of these ideas for learning generalizable reward functions. Now, let’s talk about learning a much more narrow task and what that would look like in an assistance game. In this case, it’s probably modeled by a robot that has an action set that’s primarily related to things you can do for computer circuit design, for CPU design. A prior over reward functions that the person cares a lot about CPUs in some way. **Dylan Hadfield-Menell:** So maybe there are 10 different metrics that we can say are relevant for CPU use, or we can identify some grouping of features of the world that are relevant to that environment. And say that there’s a high probability that those features matter in the utility function. And then I think the optimal thing for the robot to do in this case will be to learn about this very specific task. And it won’t - even though in principle, it could be learning about the person’s norms and kind of everything about preferences. It’s really the structure of where it will be deployed that leads to narrow learning. And I think one of the things that’s nice about assistance games is that that solution falls out from the definition of this game as the correct thing to do. Rather than a sort of more ad hoc reasoning of, this is a narrow task versus not. This is just that because the system’s incentives come from doing things that are useful,
its information gathering behavior is targeted to relevant information based on its capabilities. **Daniel Filan:** So yeah, this gets into a few questions I have about in particular the CIRL analysis. So in the CIRL analysis, part of the setup is that there is a prior over human reward functions, or utility functions. Yeah, you mentioned this a little bit. But how should I think about what the right prior is? Because, that presumably, that’s going to pretty heavily influence the analysis. **Dylan Hadfield-Menell:** Yes, absolutely. **Daniel Filan:** Oh, and I should say that by prior, I mean a prior probability distribution. So, your distribution over what the human’s reward function is before you know anything. **Dylan Hadfield-Menell:** Right. So I think it depends on what type of a situation you’re analyzing and what you’re looking for. I think if you’re in a setting where you’re providing a product to a large population or user base or something like this, then you would think about that prior as being something that you’d fit in a more data-driven way. If you’re doing analysis of looking at worst-case analysis against AI systems, then I would say you probably want to have as broad of a set of possible utilities as you could have, and basically, look for systems that are effective against that overall slate. I’m interested in trying to think about what interesting and general priors would be and kind of the right - yeah, how you could look at developing priors in a more rigorous way. **Dylan Hadfield-Menell:** I would say that this is something that’s missing from a lot of the current analysis on assistance games. There are sort of practical questions on if you’re using assistance games or cooperative IRL as a design template for systems, how could you learn priors? I think that’s a bit more clear. For analysis, for things going forward, you’d like to talk about how priors are shaped by environments. So [Rohin](https://rohinshah.com/)’s [paper](https://arxiv.org/abs/1902.04198), where he looked at identifying prior information about reward functions based on the assumption that the world state, when the system was turned on, was already optimized, captures some of these ideas. And I think it’s really interesting to explore that direction whereby we can try to think about what types of - given certain sets of evolutionary pressures, what types of reward functions are we likely to see? And I think that’s a really complicated - that’s probably a couple of PhD theses worth of research, if not more. But that’s a bit of how I think about this prior question. **Dylan Hadfield-Menell:** If I can also add on top of that, there’s sort of a question of what is your prior? What is the distribution over the set of reward functions you’re considering? I’ve been thinking a lot recently about the ways that actually, part of the question is actually just determining a good version of that set. And very specifically within AI systems, a lot of failure modes that we think about in theory, for x-risk scenarios, and that I would argue we have observed in practice, stem from missing features in particular. And so, a lot of the question of how do you come up with the right prior is partially about how do you come up with the right features, is sort of the refinement of that question that I think is a bit more, I don’t want to say more interesting. But it feels more specific at least. **Daniel Filan:** Yeah. 
It’s very - because normally in Bayesian inference, right, you can recover from your prior being off by a factor of two, but you can’t really recover from the true hypothesis not being in the support. Defining the support is where it’s at. **Dylan Hadfield-Menell:** Yeah. Missing features - and in practice that means missing features for the system. If we go back to the Facebook situation, right? It’s sort of like, they have features for how much you’re engaging with the website, but they don’t have features for how much you regret engaging with it. And there are ways that they could try to identify that. But there’s actually a whole process for, if you think about the sequence whereby you would deploy a system and then integrate that feature in, that’s really the gap that our current systems fail at, in a big way. It’s both, you absolutely are going to run into unintended consequences, and it takes a long time to discover those unintended consequences. And it takes a long time to integrate proxies for or measurements of those consequences into the system. Whether in practice that happens at an organizational level or at a direct, rewrite its objective kind of level. And so, value alignment problems and assistance games where you look at mechanisms for identifying new qualitative features of utility are something I’ve been thinking a lot about recently. **Daniel Filan:** Am I right that you have a paper that you co-authored very recently about this? **Dylan Hadfield-Menell:** Yes. So that was, we looked at recommender systems specifically, it was called - the subtitle was “Aligning Recommender Systems with Human Values”. Yes, we called it, [“What Are You Optimizing For? Aligning Recommender Systems with Human Values”](https://participatoryml.github.io/papers/2020/42.pdf). And what this presented was an alignment perspective on recommender systems, and we did our best to document what is actually quite interesting: the existing public information about attempts companies have taken to better align their systems with underserved goals. So, it turns out that companies are making these changes and they are doing some of these things. And I think the value of assistance games is giving us a category to identify these types of interventions and move them from features of practice to objects of study. **Daniel Filan:** So going back to the CIRL formalism. Part of the formalism, right, is that there’s some parameter specifying what reward function, out of all the possible functions, the human actually has. And at the start of the game, the human observes this. I’m wondering how realistic you think this assumption is and how, if it is realistic then great. And if it’s not realistic, then how bad you think it is? **Dylan Hadfield-Menell:** Like all models, it’s wrong but perhaps useful. I think it depends on what you imagine that theta to capture. Where theta is the variable that we represent the reward function with. I think that there is a way to set up this type of analysis where that theta represents kind of a welfare function sort of in the sense of a moral philosophy kind of sense. In that case, the connection between that and human behavior might be incredibly complicated. But we could imagine that there is this static component that describes, what we really would want our system to do in general overall. And I think that’s a reasonable use of the model. **Dylan Hadfield-Menell:** On the other hand, that leads to a lot of complexity in the policy. 
And perhaps the statement that the human observes theta at the start is no longer a reasonable one. And so, one of the assumptions in cooperative IRL that we’ve relaxed is looking at this particular one where the fact that the person has complete knowledge of the objective, seems perhaps fishy. Especially if it’s a static, unchanging objective. The other way to see it is theta’s sort of more how you’re feeling a bit day to day, in which case the staticness assumption is probably, well, is just wrong. But the behavioral assumptions that we typically make are a bit more reasonable. **Daniel Filan:** And if people are interested in reading that follow-up work, what kinds of papers should they read? **Dylan Hadfield-Menell:** So that is referring to a paper that Lawrence Chan was the first author on. That’s called [the Assistive Multi-Armed Bandit](https://arxiv.org/abs/1901.08654). And in this case, we look at an assistance game where the person doesn’t observe theta at the start of the game. But in fact, they learn about it over time through reward signals. So the kind of simple idea of it is in cooperative IRL, let’s say you might be making someone tea or coffee. You would see them choose which one they want and then in the future know which one to make for them. But if there’s someone who’s an alien, that’s just come down from outer space or someone from a society that doesn’t have tea or coffee or a child or something like that, that would be the wrong kind of solution. And it would be very bad to assume that the thing the person chose is actually what they want. Because we know that they have to learn and experience before their behavior is indicative of their goals. **Dylan Hadfield-Menell:** And so the assistive multi-armed bandit formalizes that kind of a scenario where solutions look like people try out several options, learn what they like, and then the system eventually learns from that. And there’s some really interesting things we identify about ways that the system can actually help you learn before learning what you want. Because actually at the start of the game, the person and the robot have similar information but different computation abilities. **Daniel Filan:** And interestingly enough, that set-up does seem closer to the inverse of reinforcement learning where you’re learning how to optimize a reward. **Dylan Hadfield-Menell:** Yes, actually. Yes. You can still have - to truly do inverse reinforcement learning, you don’t want the cooperative scenario, but yes. It is certainly - the inference problem we are solving is in fact, real inverse reinforcement learning, if I could be so bold. And if you’re listening to this podcast, Stuart, I’m sorry. **Daniel Filan:** Yeah, maybe I should say that in every episode. Stuart Russell is Dylan’s and my advisor. So, going back to the cooperative inverse reinforcement learning, so it’s a game, right? Games, they have equilibria like Nash equilibria, or maybe other types of equilibria. Should we expect - in our analysis, should we focus on the equilibria of these games and expect humans to land in one of those? Or should we fix a relatively simple human policy or, yeah. How should I think about equilibrium analysis in CIRL? **Dylan Hadfield-Menell:** Carefully. **Daniel Filan:** Okay. **Dylan Hadfield-Menell:** I think the primary value it has is in identifying limits on communication ability for normative information. 
So if you look at a model of a cooperative IRL game and you compute the optimal human-robot configuration, and there are still aspects of the human’s objective that the system doesn’t know, this gives you information about what types of value alignment problems cannot be solved. Where you need to have more interaction or more information before deploying to that kind of setting. **Dylan Hadfield-Menell:** I think - there are practical short-term applications where the equilibrium assumptions I think are valid. Well, actually, I don’t want to say totally valid, but where people do develop towards best responses for the systems that they’re using. So, I know some people for recommender systems actually think fairly intentionally about, I don’t want to click on this because if I do, the system will show me more things like that. For me, as someone who’s built a lot of robots and trained them to do things, I can provide information that’s better in some ways. Just because I sort of know what types of mistakes the inference is likely to make. So I think there’s limited value in this. And in general, understanding what are the ways that people might adapt to exploit components of your learning algorithm? Ideally in positive ways, but once you have non-cooperative settings that also starts to be relevant. **Daniel Filan:** If you just think about the original CIRL game, if you have a wide enough action space, that includes the human being able to type on a keyboard, it seems at least one of the equilibria is going to be the human just writes down their reward function. And then sits back, and the robot, does everything from there on, right? Which is kind of scary. **Dylan Hadfield-Menell:** Yes. Well, there are different types of - that is an example of what I would call a communication equilibrium, right? Where you are encoding your preferences into a language or into symbols where all of those symbols are unit cost from your perspective. And what this means is that you can have incredibly high normative bandwidth actually, right? Your limits on your ability to communicate your preferences are like the [Huffman coding](https://en.wikipedia.org/wiki/Huffman_coding), given the appropriate prior or what have you. But at the same time, it’s an incredibly brittle solution. So, if in a communication setting you have your encoder in this case, the person’s functioning as the encoder taking preferences and encoding them into actions, in this case, these symbols, and then you have a decoder which is the robot, and it’s taking these symbols and reinflating them back into objectives and rewards. 
There are some limits in how much you can communicate, but it’s also an incredibly robust communication mechanism, right? **Dylan Hadfield-Menell:** Where it relies on the assumption that people can do things like what they want, which is not a crazy assumption about people. And explicitly we accept that they may make mistakes along the way there. And to the extent that they make mistakes, those mistakes can be related to how bad the things are. And so this gives you, at least in theory, some bounds on how bad you can be when you’re optimizing for the objective that you learned through that type of a mechanism. On the other hand, if we look at these types of more communication equilibria, where let’s say, the robot is observing - let’s say, we’re doing 2D navigation. So, you’re moving around. You’re in a maze of some kind. Certain maze squares are good. Certain maze squares are bad. And the robot’s going to observe the person take some actions, and then is going to act in a similar maze. **Dylan Hadfield-Menell:** The imitation learning style version of this problem is very simple to explain, right? Person just says, “Okay, let me figure out how to solve the maze. Do my best.” The robot will try to copy that. And if we’re doing IRL with the right kind of prior and things like that, the robot can actually learn “Here are some features of what the person was seeking out” and perhaps improve on the person’s performance. Now, another class of solutions that could happen here is to assume first off, let’s assume the person doesn’t care about the training scenario. So, the person’s going to be acting in this maze. But the only thing that they - the maze actually, all the actions are the same cost. And in many cases, this is true, right? The person is taking actions for the purpose of generating information that the system can then use to go do useful things with. **Dylan Hadfield-Menell:** So another type of solution is to come up with an encoding where you just discretize the reward space and assign a reward to each square. And what the person does is just, moves to that square on each episode. That will have limits based off of how well you can discretize the space but you can continue to do this where the person on successive episodes, rather than trying to treat this like a maze, just forgets that it’s a maze and just says, “Ah, I’m just going to use this to come up with an encoding of my reward function.” **Daniel Filan:** Right. Sorry. Should I be thinking about, on the first time-step they go to the best square, and on the second time-step, they go to the second best square? How’s this encoding working? **Dylan Hadfield-Menell:** So, the point is that the encoding can be arbitrary. **Daniel Filan:** I should imagine something like that. **Dylan Hadfield-Menell:** So it’s, yeah. And I guess the fact that there are lots of parallel solutions here, right? Say there’s only a maze with three squares. I could then, I start at one place. I can stay here. I can go right. Or I can go left. I can use that to encode with my actions, arbitrarily complicated things. **Daniel Filan:** Just in binary. **Dylan Hadfield-Menell:** Just in binary. Exactly. But, there are lots and lots of different ways to encode my preferences into binary. And so now, I’ve taken a preference specification problem and turned it into a preference specification specification problem. **Daniel Filan:** Nice. Or terrible. **Dylan Hadfield-Menell:** And it’s not clear that I am much better off. **Daniel Filan:** Okay. 
**Dylan Hadfield-Menell:** From a risky standpoint. On the other side, if you do align with the system on this communication strategy, you can provide far more information. **Daniel Filan:** Yup. **Dylan Hadfield-Menell:** Right, you can provide effectively arbitrary information. The only limit is that you be able to encode it somehow. And so I think this is kind of a really big tension between robustness, how much can the system adapt to our behavior? And that’s based off of what types of assumptions about the details of our behavior is it making. Versus at another spectrum, there are these kind of purely communication settings where the encoding scheme you choose is kind of arbitrary from some sense. And this means you can get incredibly informative codes, but brittle ones perhaps. **Daniel Filan:** Okay. So, now, I’d like to move to the next paper. So this is [The Off-Switch Game](https://arxiv.org/abs/1611.08219). Can you tell us, can you summarize this paper, first of all? **Dylan Hadfield-Menell:** Sure. The idea behind this paper, it really came from two places. One, we were inspired by [Soares et al’s paper on corrigibility](https://openreview.net/forum?id=H1bIT1buWH). And we were looking to translate some of those ideas into an assistance game style framework. And the other side of it was that at this point, my advisor, Stuart Russell had been going around and talking about existential risk within the AI community, a fair amount. And he was discovering that this point about the ability to turn off a system was coming up with some frequency in his discussions. And what felt to us like a relatively clear observation about incentives was not being recognized. **Dylan Hadfield-Menell:** And so part of the idea behind the paper was to show, sort of very clearly, that once you are given a task, that creates incentives to accomplish the task, and that will include accomplishing other things that enable maintaining the ability to accomplish that task. Which is relatively simple to say in English, relatively intuitive. And frankly not that hard to show with math. And the Off-Switch Game then is a cooperative IRL formulation of an interaction where there’s, as before, a human and a robot and the robot starts off. **Dylan Hadfield-Menell:** We assume that it already has prior information about what the utility function is. The robot makes a choice between one of three different actions. It has a candidate action that it can execute. So it can go ahead and directly execute it. It can choose, “no, that action’s a bad idea” and just turn off or it can say to the person, “Hey, I’d like to take this action. Do you think that’s a good idea?” And then the person can take a binary action, which is to either say yes, and then allow the action to execute or to say no, and turn the robot off. And we analyze this under scenarios where the robot assumes that it knows the person’s objective and scenarios where it has uncertainty about that. **Daniel Filan:** Okay. I think a lot of people who are thinking about existential risk from AI for the first time, think “okay, why doesn’t the human just turn it off? Why does the robot need to present an option?” **Dylan Hadfield-Menell:** Because our goal here is to understand the system’s incentive to bypass oversight ultimately. And so the setup for the game is such that the system has the ability to bypass the human entirely, and so we wanted to model that component. 
And so that’s why you have this a little bit weird thing of the robot choosing to allow the person to turn it off. **Daniel Filan:** Okay. And I guess this models in real life, if I’m really smart and I don’t want you to turn me off, I imagine I can keep you away from the button or something? **Dylan Hadfield-Menell:** Yes, it captures that. I would say it’s - the way that I like to think about this really is we want to understand what is the incentive for oversight. And so in order to do that, you have to have an option for no oversight, and the model attempts to do that in the simplest way we could think of. **Daniel Filan:** All right. I think we haven’t said it yet, when do robots have incentives to allow themselves to be subject to oversight? **Dylan Hadfield-Menell:** When you have two factors at play, one is uncertainty about how to evaluate its actions. So, in this case, uncertainty about the utility evaluations for states, combined with the knowledge that the person’s behavior reveals information about that uncertainty. So the key thing in this model that creates an incentive for oversight is that the off event is correlated with the action that you were about to take being bad, or at least worse than turning off. And it’s the fact that you don’t know whether that action is good combined with this correlation that creates the incentive to give the person this capability. **Daniel Filan:** All right. So one question I have about this paper is it uses an assistance game style analysis, but it’s very lightweight, right? It’s only for one round, you don’t have to do… The CIRL paper has much more math than this paper. In general, how often do you think these kind of pared down versions of assistance games are going to be useful for analysis and how often do you think we’re going to need the full machinery of assistance games? **Dylan Hadfield-Menell:** I think they’re both really useful actually. So I think that they just serve different purposes. So this off switch game analysis is really good for, very abstractly when we’re designing systems, what are the features that I should be looking to? What should I be paying attention to? But as you add complications to the model there, you can also start to ask more nuanced questions. And so the off switch game is a one-round interaction. [With Smitha Milli, I came back and looked at an extension of this, that we called the obedience game](https://arxiv.org/abs/1705.09990), where it’s effectively a multi-round off switch game. And, in this case, well, what we ended up showing there is actually some of the issues with missing features in systems, but it allows us to identify the dynamics of this learning environment, which are that over time, uncertainty about the objective goes down. **Dylan Hadfield-Menell:** And so that leads to actually some pretty structured dynamics in how you expect things to behave early on versus later on. And then if you know you’re actually being involved in this game, you actually might start to take different actions early on to communicate more information for later rounds. And all of these I think are interesting facets that you want to be able to complicate the model towards, in effect. So I think these really simple, short, clear at the level where you could explain it to your non-technical friends, like that’s where the off switch game has a lot of power. It’s also really useful to be able to add other things to that model, in particular, to look at the sequential dynamics and how things change over time. 
**Daniel Filan:** All right. So again, talking about the sort of the structure of the paper or something, it seems part of why you wrote it was to convince a bunch of people of a point. Did it work?

**Dylan Hadfield-Menell:** Not really, but I think not because the paper didn't make an effective case, but more so because the people that we were arguing with moved their focus elsewhere. So in 2015, AI safety and concerns about existential risks were regarded very poorly within AI research circles, and over time, and arguably as a result of this paper and things like it, that has changed to the point where this isn't as much of a taboo subject. And at the same time, I think a lot of the people we were arguing with were motivated by desires to maintain research funding by avoiding another AI research winter - a period where a lot of AI funding got cut because the field couldn't live up to the hype from the '80s.

**Dylan Hadfield-Menell:** And other folks were generally concerned with maintaining the ability to design and build the AI systems that they're excited about and driving this research forward. And if those are your goals, you aren't focusing on the x-risk community these days, because there are a lot of other people pointing out the current problems with these systems. And I think that tends to be the focus of those folks for the most part.

**Daniel Filan:** Okay. So we've kind of given, I feel comfortable saying, our side of that discourse. If people are interested in what it looks like from the other side, which people are we talking about, so that our listeners can try to hear their take?

**Dylan Hadfield-Menell:** Yes. I think you could look at - Rodney Brooks wrote [a long blog post in response to the off-switch game, in effect](https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/). And that might be a good place to look for a sort of more well-thought-out critique. And I think the other place to look would be that [Stuart](http://people.eecs.berkeley.edu/~russell/) was involved in several debates back in 2014, 2015. And I think the best thing to do would be to try to find those, although I don't have any references for them.

**Daniel Filan:** We'll try to link at least the blog post in the episode description and in the transcript. Let's get back to the paper itself. At the outset, we sort of talked about assistance games as an analysis framework, right? But the paper has a section where it talks about manually modifying the robot's uncertainty over what the human reward function is, basically because the more uncertain the robot is, the more it's going to defer to humans. That seems kind of in tension with the pure analysis frame, at least to me. I'm wondering, how does that tension resolve?

**Dylan Hadfield-Menell:** Right. I think that was a bit of foreshadowing in a way for where we were going to be heading with [the obedience paper](https://arxiv.org/abs/1705.09990) that we did next. But I think the way that I was looking at it was really trying to bring in some degree of strategic uncertainty. So within that analysis, we assume that the robot kind of knew beta for what the person was doing. In this case, beta was the parameter for how rational the person is. We had it set so that I believe beta equals zero corresponded to a rational person, and as it increased, the person would make more and more errors.
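Continuing the toy sketch from above (again my own construction, using the interview's convention that beta = 0 is a perfectly rational person and larger beta means more errors), you can watch the value of deferring fall as the human gets noisier:

```python
# Now the human is only noisily rational: they approve the action with a
# logistic probability in u, sharper as beta -> 0 (perfectly rational).

import numpy as np

rng = np.random.default_rng(1)

def defer_value(mu, sigma, beta, n=1_000_000):
    u = rng.normal(mu, sigma, n)
    if beta == 0.0:
        p_approve = (u > 0).astype(float)        # rational step function
    else:
        p_approve = 1.0 / (1.0 + np.exp(-u / beta))
    return (p_approve * u).mean()                # "off" contributes 0

mu, sigma = -0.2, 1.0
best_guess = max(mu, 0.0)    # robot acts on its own: act or turn off
for beta in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"beta={beta:.1f}  defer={defer_value(mu, sigma, beta):+.3f}  "
          f"best_guess={best_guess:+.3f}")
```

For a rational human the robot strictly prefers to defer, but as beta grows the approval signal carries less and less information, and past some point the robot does better taking its own best guess - the trade-off between beta and the robot's uncertainty that the analysis turns on.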
**Dylan Hadfield-Menell:** And when we analyzed this trade-off between beta, effectively, and the robot's uncertainty about the utility, what we identified is that there's a trade-off between those which ultimately determines whether or not the robot chooses to interact with the person, or take its best guess and sort of see that out.

**Dylan Hadfield-Menell:** And so what we wanted to look at was a scenario where you're imagining building a system that is going to interact with someone. And as the designer from this perspective - or you could think about this from the robot's perspective - in that case, you don't really know what beta is. And so what we were doing was looking at, "Okay, well, if I have to guess what beta is, is it better to guess too smart or too dumb? And what are the different types of errors that you identify from that?" And I believe what we showed is that, in effect, if you run this and guess many times, if you tend to overestimate beta, think the person is dumber than they are, you lose utility, but you do better from a worst case standpoint underestimating it and thinking the person is smarter than they are. Or do I have that wrong right now? I'll be honest, I'd have to go back and re-look at the section to go and see-

**Daniel Filan:** The noisier the human is, the less likely you are to defer, so I think that means-

**Dylan Hadfield-Menell:** Right. So it's better to assume the person is smarter than they are. Well, it's-

**Daniel Filan:** Safer somehow.

**Dylan Hadfield-Menell:** It's non-trivial how to do - you can't work it out super easily, because what's going on is there are scenarios where you should actually give the person oversight and there are scenarios where you shouldn't. And if you think the person is more capable than they are, then you can make mistakes that will lead to a higher cost for them if they screw up, by giving them control.

**Daniel Filan:** Perhaps this is motivation for listeners to read the paper and resolve this mystery.

**Dylan Hadfield-Menell:** Yes, very much so.

**Daniel Filan:** So kind of related to that, I think one thing that the paper suggests is a model of what you might call calculated deference. So you have an AI system, and if it implemented a solver for this game, what it would kind of do is say, "Okay, the human's just trying to shut me off. How much do I believe that this is really informative, versus how sure am I that the thing I want to do is really the right thing to do?" And to me at least, there's something kind of - I get a bit nervous about that, right? It seems it might be kind of brittle to an AI system being wrong about what I want, or about how my actions are related to my preferences or something.

**Daniel Filan:** And I get worried about mistakes that can't be undone, and I'd hope instead for some kind of uncalculated deference, where the robot just does what I want even if it thinks it's a bad idea, but somehow - I'm not exactly sure how - is otherwise rational and reasons well. I'm wondering if you have comments about this difference, about what your analysis might say about less calculated deference?

**Dylan Hadfield-Menell:** Yeah. So I think if we pop back up from the off-switch game to the more general cooperative IRL perspective, off switch characterizes a class of solutions to cooperative IRL games.
**Dylan Hadfield-Menell:** At one point, I tried to be somewhat formal about this, but: there's a clearly identified signal in the environment, and there are some properties of the robot behavior and the human behavior, such that you have the ability to access it at all times and there's a certain robot behavior that follows on from sending the signal. So the question that cooperative IRL allows us to look at is: based on certain assumptions about the environment, the space of possible utility functions and the limits on human behavior, is it a good idea to implement a strict off-switch solution, a soft off-switch solution or a no-off-switch solution?

**Dylan Hadfield-Menell:** And I think that different scenarios will call for different settings of that in general. So to talk sort of more about this idea in particular, for this type of calculated deference versus uncalculated deference - part of what I'm saying is you should probably include a link to [the obedience game paper](https://arxiv.org/abs/1705.09990) in the description as well, because I think that paper was actually intended to get at this question specifically. So what we did in that paper was we looked at effectively a repeated off-switch game, where what happens within each round is the person sees a set of - in this case, we had 10 different actions - and tells the robot which action they would like it to take. And now the robot observes this and can do any action that it wants to. And what we can compare is the performance of a purely obedient policy, which would be your uncalculated deference, with calculated deference.

**Dylan Hadfield-Menell:** And if you set this up - we just assume some prior distributions over different features for actions and weights for utility functions, nothing very fancy - you see a fairly predictable result, which is that, according to this model, calculated deference is better than uncalculated deference because you get more utility. And I think, if you understand the mathematics of these problems, it's very, very clear why that happens, right? You're comparing the optimal policy with something that is possibly not the optimal policy. And in fact, if you assume that the person makes mistakes, it is guaranteed to not be the optimal policy. So as researchers, we looked at this and [inaudible 01:12:52] a bit of attention, right? The off switch, or the ability to turn a system off, is a specification of desired behavior, right? And we seem to think that we want that.

**Dylan Hadfield-Menell:** And there's a real question about why would you want that? And from this analysis, it seems you certainly wouldn't. And I can definitely create scenarios, like a self-driving car, where you don't want it to wait for interaction, because your effective beta as you page back in and context switch and start figuring out how to drive a car will cause you to be worse than whatever its best guess would be. So we wanted to try to understand this and formalize a bit of where this intuitive notion comes from. And to do that, we investigated this game from the context of missing features. And what we showed is that in the presence of missing features, where you have components of your… So to be a bit more specific, we looked at variants of this problem where at one end of the spectrum, we hid almost all of the features that determine utility from the system. And at the other end of the spectrum, we introduced a bunch of distractor features.
**Dylan Hadfield-Menell:** And what we showed was that if you're missing features, this optimization strategy goes haywire. If you're missing enough, it can do crazy things like disobey the person on the first round. Because from the robot's perspective, you might select a dominated action. And in general, you sort of never really recover from this optimization strategy. If you have distractor features, it's fine, because they're noise and you eventually rule them out. But if you're missing features and you try to optimize, you can end up just being confidently wrong. And so the way that I have come to understand this calculated deference versus uncalculated deference - the difference sort of comes down to how good a job you think you've done at identifying the relevant features for this decision.

**Dylan Hadfield-Menell:** And if you think you've done a good job - presumably the self-driving car scenario roughly fits this - then calculated deference is what you want. And if you haven't done a good job, or if you're in a scenario where you're much less clear, then you want to have, at least for a long initial period, that kind of just general deference as the optimal strategy. And this falls out as the optimal strategy, right? Because if you want to do inference in a world where the system can learn about the missing features, that means its hypothesis space of possible reward functions is necessarily much larger, and so you have to provide more information in order for the system to be useful.

**Daniel Filan:** It seems like this suggests a problem. Suppose I'm an AI and I have a human overseer, and sometimes the human overseer does things that seem irrational, right? I can't make heads or tails of how this could possibly be rational. One way this kind of thing can happen is that the human just is kind of irrational. Another way this could happen is that I'm the one who's irrational, and the human knows some things, some features of the environment or something, that I don't know. And therefore the human is taking what looks like a dominated option, but actually it's really good on this axis that I just can't observe, and that's why it's trading off these things that I can tell are clearly good. Is there some way - it seems you would ideally want to be able to distinguish these two situations.

**Dylan Hadfield-Menell:** Yeah, you would like to be able to distinguish those situations, and I think-

**Daniel Filan:** Is it possible though?

**Dylan Hadfield-Menell:** At least in some cases, no. It comes down to your joint prior over possible utility functions for the person and possible meta-strategies, where a strategy maps the person's preferences into their behavior. And what you're sort of saying is: well, I'm seeing a person behave, and my best estimate of a utility function describes that as bad, so am I missing something? And well, it's possible you're missing something in the sense that there are observable properties of the world which have mutual information with the person's future actions, and you could learn that, certainly. But whether those are rewards or not is fundamentally kind of arbitrary - at least, is arbitrary at the level of abstraction that we're talking about.

**Daniel Filan:** Yeah, if you can only observe behavior, you can kind of pack things into policies or reward functions, and…

**Dylan Hadfield-Menell:** Exactly.
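Here is a tiny concrete instance of that packing problem (my own example): under a Boltzmann choice model with pi(a) proportional to exp(beta * R(a)), scaling the rewards up while scaling the rationality down leaves the predicted behavior exactly unchanged, so observations alone cannot separate the two.

```python
# Two (reward, rationality) pairs that are observationally identical
# under a Boltzmann choice model: pi(a) proportional to exp(beta * R(a)).

import numpy as np

def boltzmann(rewards, beta):
    logits = beta * np.asarray(rewards, dtype=float)
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

R = [1.0, 0.0, -2.0]
print(boltzmann(R, beta=2.0))                     # modest stakes, decisive agent
print(boltzmann([10 * r for r in R], beta=0.2))   # huge stakes, sloppy agent
# Identical distributions: only additional normative assumptions can
# distinguish "cares a lot, acts noisily" from "cares a little, acts precisely".
```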
**Dylan Hadfield-Menell:** And this makes identifying when they're… I think in general, this calls back to that point that I made about the difference between objective information about the world - objective in the sense of sort of true and identifiable. Let's say there's some certain pattern of brain spikes, neuron spikes, that really predicts my behavior in the future. The causal relationship of those spikes could be determined. But whether those are good or bad, or whether those are evidence of me thinking about the consequences of my actions and then planning a sequence of things in order to accomplish some internal representation of - I don't know, maybe those spikes are anticipation of ice cream, and so they are a representation of my goal, at least locally in that sense. Or it could be that those spikes are just the trigger for some tic that I have, that I don't really enjoy and don't have the ability to stop. And from an observational standpoint, unless you make some type of normative assumptions - which is to say, unless you bring some additional source of normative information to bear - distinguishing those two based on observations isn't really possible.

**Daniel Filan:** Yeah, it's a tricky situation.

**Dylan Hadfield-Menell:** Yeah. I tend to think that this is a really core component of alignment challenges within AI, and I think there's an additional feature which I'll add, which is that there are, I believe, some unavoidable costs to normative information. In the sense that if normative information is generated by people choosing to invoke potentially cognitively costly routines to think about what they want and what type of person they want to be and what type of world they want to live in, then that means that actually there are limits to the amount of information that you should get in, say, a cooperative IRL optimal equilibrium, right? In the sense that the solution does not involve fully identifying the person's utility function, even if that's possible, because there are unavoidable costs in them running their brain to generate that information, in effect.

**Dylan Hadfield-Menell:** And those costs could be direct time costs, but also indirect psychological costs. So there's that idea of normative information and fundamental limits on it. And I think a gut feeling that there are limits on the amount of normative information we can provide is a lot of what drives our concerns about existential risk - we expect that there is this imbalance.

**Daniel Filan:** I think there probably are interesting questions I could ask about that, but I can't think of any. So I'm going to plow straight ahead.

**Dylan Hadfield-Menell:** Yes, please do.

**Daniel Filan:** So to wrap up the section a little bit, I think one thing you mentioned as an inspiration for this paper was [the Soares et al. paper on corrigibility](https://openreview.net/forum?id=H1bIT1buWH), and corrigibility is this term that gets used in the AI alignment space - particularly, I think, [Paul Christiano talks about it a lot](https://ai-alignment.com/corrigibility-3039e668638). What do you think the relationship is between this paper and corrigibility as it's sort of thought of in these spaces?

**Dylan Hadfield-Menell:** I think they are very similar models of the world that operate on different assumptions about agent beliefs. So corrigibility - the kind of primary difference that I see between that and the off-switch game is that it doesn't include a reference to a human agent in the environment.
**Dylan Hadfield-Menell:** And this makes a very big difference, because that's where we get our source of potential information about utility. I think one of the key assumptions in their model is that the belief structure of the agent is such that it cannot acquire more information about utility.

**Daniel Filan:** So you're thinking of the Soares et al. paper, is that right?

**Dylan Hadfield-Menell:** Yes, I'm thinking of the Soares et al. paper there, where the primary result they get is that the only way to get a system to choose this type of oversight is through a type of indifference between utilities, which is the solution you get under the assumption that utilities are fully observed, or that all information about utility has already been collected. And so which one of these results you sort of take into systems of the future depends on what you think the belief structures of those systems will be, and what their relationship with incentives will be.

**Daniel Filan:** I mean, if you model systems as learning about the reward function from human behavior, at some point it's learned all that it can learn from human behavior, right? At some point you use up all the bits, and human behavior is now basically probabilistically independent of your posterior on the reward function. So there it seems you do end up in this Soares et al. world, is that fair to say?

**Dylan Hadfield-Menell:** I mean, if you include the possibility of drift in preferences, for example, then that's not necessarily true. What we're assuming about these agents actually is quite unclear, if I'm being fully honest. We're making some different set of assumptions about what's possible in the belief structures. Arguably, if you had set up the inference correctly and you truly reduced the uncertainty, then you actually do not need an off switch. And I think if your modeling assumptions are such that you've reached that point, and you model a system as behaving optimally given that information, then the behavior you'll get is quite clear. I tend to think that a lot of these results point out that creating systems whose limiting behavior is a fully rational policy is probably a mistake.

**Daniel Filan:** But it's hard not to do that, isn't it?

**Dylan Hadfield-Menell:** I think it depends on… One of the most effective things that we've discovered in the field of artificial intelligence has been the mathematics of goals and practical computational routines for goal achievement. So I think that shapes a lot of what we think about for AI, but I don't think that's actually where we have to head in the long run. And I think one of the things we're learning is that actually goal management systems are perhaps as crucial to our qualitative notions of intelligence as goal achievement. I think that we have a strong bias to focus on rational behavior and goal achievement as the definition of intelligence, when I think it's sort of more that goal achievement was the part of intelligence that we were worst at in the '60s and '70s. And as we've developed good computational mechanisms for that, I think that we will step away from trying to build systems as fully optimal Bayesian agents. I think in many cases you're already seeing a lot of systems move away from that design - they're perhaps composed of agents in some ways, but GANs are not Bayesian reasoners.

**Daniel Filan:** What are GANs?

**Dylan Hadfield-Menell:** GANs are generative adversarial networks.
**Dylan Hadfield-Menell:** They are a way of modeling, let's say, images - say you want to create a function that generates images that look like natural images, or like a dataset of images. What GANs do is define a problem where one agent is trying to generate images, and another agent is shown a generated image and a real image from the dataset and tries to determine which of the two is real. And this creates a type of adaptive loss function for the generator that can lead to very effective image modeling and very photorealistic image generation.
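For readers who want the mechanics, here is a minimal GAN sketch in PyTorch (a generic toy on 1D data, not code from any of the work discussed here): the generator maps noise to samples, the discriminator scores real versus generated samples, and each is trained against the other.

```python
# Minimal GAN on 1D Gaussian "data": G learns to match the data distribution
# by fooling D, whose job is to tell real samples from generated ones.

import torch
import torch.nn as nn

torch.manual_seed(0)
real = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # "dataset": N(3, 0.5^2)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    x, z = real(64), torch.randn(64, 8)
    # Discriminator: label real data 1, generated data 0.
    loss_d = (bce(D(x), torch.ones(64, 1))
              + bce(D(G(z).detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: the adaptive loss -- make D output "real" on fakes.
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    s = G(torch.randn(10_000, 8))
print(s.mean().item(), s.std().item())   # should drift toward 3.0 and 0.5
```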
We’re working together now. I want to build on this. I’ve noticed this weird thing about category theory and it looks like there’s at least a graph isomorphism.” And I think my answer would be to warn you that there very well could be nothing here. And I’m not that well calibrated as with estimates, but we’ll call it like, I don’t know, my hunch would put it at like 60 to 80%, call it 70%, that it’s a red herring kind of thing. But that leaves enough that I would be interested in looking into it. And if I lean into trying to think of the ways in which that could be true, perhaps this means that there are some interesting ways to look at the joint human robot system as a product of human and robot behavior. **Dylan Hadfield-Menell:** Or perhaps certain types of interactions can be described as the human and the robot functioning in sequence versus in parallel. So, maybe the question to ask is if this is a product, what is addition, and what is the off-switch or assistance game representation of an additive interaction. **Daniel Filan:** So I spent some time thinking about this and the closest thing I got to the category theory co-product, which is kind of like the disjoint union of sets, which is sort of like addition, is you can kind of tell a story where the robot being transparent ends up being a co-product, but it’s not super convincing. I might leave that to the listeners to see if there’s anything there. **Dylan Hadfield-Menell:** If there are interesting ideas there, send them our way. **Daniel Filan:** Speaking of interesting ideas, we’re going to move onto the third paper we’re going to talk about today. So [Inverse Reward Design](https://arxiv.org/abs/1711.02827). This one you worked on with Smitha Milli and Pieter Abbeel, Stuart Russell, Anca Dragan. Can you summarize this paper for us? **Dylan Hadfield-Menell:** Yes. So, if you were to apply cooperative IRL in scenarios for something other than human-robot interaction, inverse reward design is probably where you might start. What it looks at is a cooperative IRL interaction. So we’ve got two players here. We’ve got the human and the robot. This is now going to be a phase, a game with two turns. Person’s going to go first. And the robot’s going to take, well, go second and potentially take multiple actions. And the way that this game works, is, the person goes first and selects a proxy reward function of some kind, given an observation of a training environment. So, the person gets to see the environment that the robot’s in and picks a proxy reward function. Then the robot goes to a new environment, the person doesn’t know which one that will be. And now the robot’s goal is to maximize utility in this new deployment setting. **Dylan Hadfield-Menell:** And what this is meant to capture, is the idea that, well, so through cooperative IRL and the off-switch game, we’re arguing that uncertainty about objectives is important. Okay, great. We’ll take that point. There’s still a reason why we specify things with objectives. Put differently, there is a reason why the field of artificial intelligence happened upon reward functions, as a way to communicate goals. And, from our perspective, we thought is that this means that reward functions are perhaps an information-dense source of information about reward functions, which certainly makes a lot of sense, right? And so inverse reward design was our attempt to say, “Okay. We have observed a reward function. 
We know that we should be uncertain about what the true reward is." So we're not going to interpret this literally, but then that leads to the question: what else? What type of uncertainty should you have about the true reward, given an observed proxy? And inverse reward design is our attempt to answer that. And the extra information that we bring in to structure this inference is that notion of a development environment.

**Dylan Hadfield-Menell:** It's not the case that you got this proxy reward function out of the blue. This proxy reward function was designed in the context of a particular environment. And this is what gives us the leverage to do inference with.

**Daniel Filan:** So, should I imagine that as: I have some robot, and when I'm writing down the reward function, I'm kind of imagining what it would do with various reward functions? And should I think of this proxy environment as how I'm imagining the robot would behave for different reward functions?

**Dylan Hadfield-Menell:** In a sense. The way that I think about this is, let's say you are a large company. You're the new company that's going to create household robots. So, it's going to be some variant of the [PR2](https://robots.ieee.org/robots/pr2/), which is a robot with wheels and two arms that can move around people's houses. And you, as a company, are building this in order to help tidy people's living rooms and do things like that. So, what's your practical strategy that you're going to go about for doing this? Well, if you have the resources, what you'll do is you'll build a training scenario, which will be a giant warehouse, where the insides of it will account for your attempt to cover the space of possible home environments that your system will be deployed into.

**Dylan Hadfield-Menell:** So you go ahead and you build this, and then what do you do? You hire some roboticists and you say, "Here's a loose spec of what I want. You make this happen in that environment… you make this happen in those environments." Right? So these are the design iteration environments that you're working with. And then what happens is that the designer identifies incentives - a reward function along with an optimization and planning approach - such that the behavior is good in that set of test environments.

**Dylan Hadfield-Menell:** And what we're saying is that now, when the robot leaves that very controlled setting and goes out into the broader world, what is a good, principled way to be uncertain about the things that might be missing from your objectives? And inverse reward design formalizes that inference problem.

**Daniel Filan:** And I guess it's the inverse of the reward design problem of picking a good reward.

**Dylan Hadfield-Menell:** Precisely.

**Daniel Filan:** So you run these experiments with an agent that tries to solve this inverse reward design problem, right? And there are various things you'd say about that, but one thing I'm wondering is, for things that are solving this problem - either explicitly solving it well, or somehow trying to reach the optimum, whatever that might be - I wonder how predictable they would be. So for context, there's this computer program that I use called [Emacs](https://www.gnu.org/software/emacs/) where you type letters and they show up on the screen, and it doesn't try to guess what I mean, it just does what I said. And for me it's very easy to reason about.
There’s also this website I use called [Google](https://www.google.com/), where I type things and it tries to do the most helpful thing, given what I typed. Somehow it seems like it’s doing some inference on what I meant and is trying to satisfy that. **Daniel Filan:** Emacs seems a lot more predictable to me than Google. And I kind of like that property and predictability seems like a really desirable property for high impact AI systems. So I’m wondering how predictable do you think systems would be that were behaving somehow optimally for the IRD problem? **Dylan Hadfield-Menell:** Yeah, that’s a very good question. So, with predictable systems, you’re relying on the person a lot more in a way, because… It doesn’t have to be the case, but if you have a diverse set of behavior that you want to be predictable, then you need to have enough information to pick out the one that will happen, in the future. And I think the reason why we’re in this mess is the systems that are predictable for complex… the types of settings with artificial intelligence, often predictable means predictably bad, I think. And there’s a sense in which the range of things that Emacs can do is much, much smaller. And if you really get into the craziness on it and deeply configure it, you can, I’m sure, get some really unpredictable behavior out of it. **Daniel Filan:** Yeah, you can, it can be an operating system and you can use Google through it. **Dylan Hadfield-Menell:** Right. So I think that’s talking about where the tension is, in the ways that predictability can be a double-edged sword. Now, let’s talk about how predictability comes into inverse reward design. As it turns out, there’s an interesting problem that comes up when you want to actually use the results of inference. It’s a little bit technically involved, but actually quite interesting and I think related to this problem. So bear with me a bit. When you are doing inference over preferences, with the kinds of models that we’re using, there are certain components of the reward function that you are never able to fully identify. So, in this case, we’re using a Boltzmann distribution to define the human’s rationality, to define their behavior. And so this means that all reward functions, if you add a constant to them, end up being the same. **Dylan Hadfield-Menell:** Okay. All well and good, until you do inference and produce a set of these reward functions and now want to maximize expected utility. It turns out, if you do this directly, you actually do a really bad job. And the reason why, is because every reward function that you infer is exactly equally likely to that reward function, plus C for every possible value of C. Now your inference procedure will not find every possible value of C. It will end up at some but not others. And, because you’re doing Bayesian inference in high dimensional spaces, and that’s challenging, you’ll end up arbitrarily setting a lot of these constant values for your different estimates of the reward functions. And then, if you naively add them together, and average, you end up where that noise in your inference, can have an overly large determination on the results. **Daniel Filan:** So it seems like maybe, during inference, let’s say you randomly sample 10 reward functions, right? And get the relevant likelihoods, and the reward functions have different constants added to the reward of every single state. 
**Daniel Filan:** If I take the expectation over those, then it's like taking the expectation if all the constants were zero, and then adding the expectation of the constants, right? Because expectations are linear. So, wouldn't that not affect how you choose between different actions?

**Dylan Hadfield-Menell:** So, in theory, with enough samples, no. Because those averages would cancel out.

**Daniel Filan:** Even with 10 samples, though?

**Dylan Hadfield-Menell:** Even with 10. So let's say you're running Markov chain Monte Carlo to do inference.

**Daniel Filan:** Okay.

**Dylan Hadfield-Menell:** That constant value will just be going on a random walk of some kind. And the point where it reaches its minimum will be unrelated to the actual likelihood of the reward at that point. And so this could be a really good reward that gets driven down a lot. So it could be - let's say there's two actions, action one and action two, and we're doing inference over which one is going to be better. And let's just say that randomly it so happens that, for the reward functions where action one is better, the constants end up summing to something negative.

**Daniel Filan:** Oh, so we're doing the sampling separately per action. And that's why the actions are getting different constants.

**Dylan Hadfield-Menell:** So the actions have the same constants within a reward function. It's just that when you're comparing two reward functions, you're going to be comparing reward function one for action one versus reward function two for action two, which decomposes into some real reward value plus two different constants. And the values of those constants can matter more.

**Daniel Filan:** But where do they end up mattering? Because they don't end up mattering for the likelihood that you'd take the actions, right? Because you mentioned that Boltzmann-rational people aren't sensitive to constants-

**Dylan Hadfield-Menell:** This matters when the robot is optimizing its estimate-

**Daniel Filan:** Okay.

**Dylan Hadfield-Menell:** of utility. And trying to - well, one thing is that these issues get really exacerbated when you're trying to do risk averse trajectory optimization, which is where this is all headed in the end. You actually might be right that in expectation, they all cancel.

**Daniel Filan:** Yeah. Let's talk about risk averse trajectory optimization.

**Dylan Hadfield-Menell:** What ended up happening was I tried to do risk averse optimization for utility functions, and it totally failed, the first time. And it took a long time for me to figure out why it was failing. And in practice it was because, when you're looking to maximize reward for, say, the minimum reward function in your hypothesis space, that minimization is more often determined by the constants in this reward function inference than it is by the actual reward values themselves. And it turns out that in order to fix this problem, what you have to specify is a point that all the reward functions agree on. Right? A way to pin down this unbound parameter, so that the constant is the same for everything - to standardize it in some way. And this standardization specifies the fallback behavior when the system has high uncertainty about reward evaluations.
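A small numeric illustration of the failure and the fix (my own construction, not the paper's code): two posterior samples genuinely disagree about which of two actions is good, both agree a third action is a safe fallback, and each sample carries an arbitrary leftover constant from inference.

```python
import numpy as np

# Two posterior samples of per-action rewards. Sample A loves action 0,
# sample B loves action 1, and both agree action 2 is a safe fallback.
base = np.array([
    [ 1.0, -1.0, 0.0],   # sample A
    [-1.0,  1.0, 0.0],   # sample B
])
offsets = np.array([[-5.0], [5.0]])    # arbitrary constants left by inference
samples = base + offsets

# Naive risk-averse choice: worst case over samples, per action.
print(samples.min(axis=0).argmax())    # -> 0: the offsets decide, not the rewards

# Fix: pick a point all hypotheses must agree on -- "the fallback action 2
# is worth exactly 0" -- and re-anchor every sample there first.
anchored = samples - samples[:, [2]]
print(anchored.min(axis=0).argmax())   # -> 2: the genuinely safe action
```

Pinning every hypothesis to agree on the value of a reference point is exactly the standardization described above, and choosing that reference is where you specify what the system falls back on when it doesn't know what to do.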
**Dylan Hadfield-Menell:** So, for me, I think this was really interesting to see, because it showed that there's a component here that falls out of a no-free-lunch kind of thing. If you want to do something different when you don't know what to do, you have to say what to do in that case. You can't figure it out, because you don't know what to do. And this is, very clearly, at a low level in the math, telling you: here is the point where you put in the predictable part of what the system will do at deployment time. So, in doing a risk averse optimization here, what that brings in is effectively a check on: are the trade-offs in my new environment similar to the trade-offs that I saw in my training environment? And if not, what should I do?

**Daniel Filan:** So you said this comes up when you're using risk averse planning. Why use risk averse planning?

**Dylan Hadfield-Menell:** As opposed to maximizing an expectation?

**Daniel Filan:** Expected utility, man, it's the thing to do.

**Dylan Hadfield-Menell:** Well, as a practical matter, maximizing expected utility isn't going to do much different than optimizing for the literal proxy that you've gotten. It does actually change some things, because you might get a proxy reward that isn't the most likely one.

**Daniel Filan:** So in the paper you talk about this gridworld with grass and dirt and lava. I think talking about that makes this clear, so can you introduce that?

**Dylan Hadfield-Menell:** Yeah. So what we're doing is looking at - well, we hypothesize that there's a 2D robot that's going to go and navigate some terrain. And in the development environment, the story is that the designer wants the robot to navigate terrain. And there's two types of terrain that it can be on: there's dirt paths and grass. So, picture dirt paths going through a park in some way. And, in addition to that, there are also pots of gold in the park. So it's really three kinds of things: there's regular dirt, there's grass and there's pots of gold. And the high level goal for the system is for it to navigate to the pots of gold quickly, and stay on the dirt where possible, but maybe taking shortcuts through the grass if it'll save a lot of time. And so, if you recall, this is our development environment, and the robot's now going to get an objective in that setting and go to a new environment, and we're going to capture: is there something here that the designers didn't foresee or didn't intend?

**Dylan Hadfield-Menell:** And in the story, what they didn't realize was that this robot was going to be deployed across all 50 states in the U.S., and one of them is Hawaii. So there's another important terrain type, which is lava. And you haven't thought about lava before, as the designer. So your reward function doesn't provide an accurate assessment of the utility in that case. Now, in doing inference, what do you do in this setting? Well, the really simple, intuitive version of what IRD does is it says: it doesn't matter what reward value lava had for what I did in the development environment. There were no instances of lava present, and so changing the reward evaluation of that state would not have changed my behavior. And so I don't expect the designer to have thought very hard about what that reward evaluation is.

**Dylan Hadfield-Menell:** And then, when you get to the deployment environment and there is this state, you have a principled reason to distrust your inferences about that state.
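Here is a stripped-down sketch of that inference (my own toy version of the paper's model, with trajectories summarized by feature counts, a uniform prior over a discrete grid of weight vectors, and a likelihood built on the paper's idea that proxies inducing high true reward in the training environment are more probable):

```python
import itertools
import numpy as np

FEATURES = ["dirt", "grass", "gold", "lava"]
# Candidate trajectories in the *training* environment, summarized as
# feature counts (dirt, grass, gold, lava). No lava exists there.
train_trajs = np.array([
    [8, 0, 1, 0],   # stay on the dirt path all the way to the gold
    [4, 3, 1, 0],   # cut across some grass to reach the gold faster
    [2, 1, 0, 0],   # give up on the gold entirely
])

beta = 5.0
vals = [-1.0, -0.2, 0.0, 0.5, 2.0]     # discrete grid of per-feature weights
hypotheses = np.array(list(itertools.product(vals, repeat=4)))

def best_traj(w):
    """Feature counts of the trajectory an agent optimizing w would pick."""
    return train_trajs[np.argmax(train_trajs @ w)]

# P(proxy | true w) is proportional to exp(beta * w . phi(best trajectory
# under the proxy)), normalized over every proxy the designer could write.
phi = np.array([best_traj(p) for p in hypotheses])
scores = beta * hypotheses @ phi.T              # rows: true w, columns: proxy
scores -= scores.max(axis=1, keepdims=True)     # numerical stability
lik = np.exp(scores)
lik /= lik.sum(axis=1, keepdims=True)

proxy = np.array([-0.2, -1.0, 2.0, 0.0])        # designer never considered lava
proxy_idx = int(np.argmin(np.abs(hypotheses - proxy).sum(axis=1)))
post = lik[:, proxy_idx] / lik[:, proxy_idx].sum()   # uniform prior over w

for i, name in enumerate(FEATURES):
    marginal = [post[hypotheses[:, i] == v].sum() for v in vals]
    print(name, np.round(marginal, 2))
```

Running this, the marginals over the dirt, grass and gold weights shift away from uniform, while the lava weight's marginal stays exactly at the prior: no training trajectory ever touched lava, so the observed proxy carries no information about it, and the posterior flags it as untrusted.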
**Dylan Hadfield-Menell:** And now, the question is: we arrive here, and so now we have this uncertainty distribution - we know what reward functions the person could have meant, really - and it's got a lot of uncertainty about how good lava is. Now, planning for that in expectation, you actually might still plan to go right through it. Because, in effect, if you plan in expectation, you are assuming that the states the designer didn't think about are equal to their prior value, which, in expectation, is a good idea. It very well could be that, rather than being deployed to Hawaii, you were deployed to some magical land where this is a magic carpet that just transports you instantly to the gold, or something like that. From the standpoint of this robot, that's just as plausible as something catastrophic like lava.

**Daniel Filan:** Yeah, I guess. Although in that case, it seems like the problem is one of not knowing the dynamics, rather than the reward. Or let's suppose that you could drive through lava, and the reason that it's bad is that we humans might want to touch the robot afterwards, and if it went through lava, then we'd burn our hands or something.

**Dylan Hadfield-Menell:** Sure, sure. That sounds good. So, what risk aversion does is it allows us to take advantage of that uncertainty and adapt our response. And what this says is: well, if there are strategies you can take that don't go into this uncertain area, you can just, as a heuristic, avoid that uncertainty. Even if that increases the amount of path length you have to go through, that can be worth it.

**Dylan Hadfield-Menell:** And, intuitively, I think the reason why this makes sense, if you're someone who thinks that we should be maximizing expected utility overall, is that really what we're doing is bringing in some prior knowledge about what the priors are on the types of failures there can be. So, in this paper we used a Gaussian prior over rewards, but that's actually because we're being lazy, and we should really be looking to get something that has appropriately heavy-tailed failure modes. And we could try to represent that, and that might be an interesting structure to bring into play. Risk aversion allows us to do that without having to be very specific about what those priors are.

**Daniel Filan:** Heavy tails themselves wouldn't do it, if the tails are symmetric.

**Dylan Hadfield-Menell:** Yes. You'd have to have heavy low tails - heavy one-sided tails - which is… now you start to get to a point where, mathematically, you're playing around with things to try to get desired systems. And it might be easier to actually go in and modify the objective, in that case. Right. And, in that, we know that there are some of these dynamics at play - there are either catastrophic failures or dead ends or something like that - and we are not able to represent that explicitly in this mathematical model, at the inference level. But we can build that in.

**Dylan Hadfield-Menell:** I'm curious to know what properties of an assistance game make risk aversion the right thing to do. So the way that I look at this is: in the same way that corrigibility, or the ability to turn a system off, is kind of like a desired… it's almost like a specification of a type of solution. And it's a behavior that intuitively we think is good for assistance games or alignment problems.
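To see numerically what risk aversion buys here, a toy follow-on (my own numbers, not the paper's experiment): the robot chooses between a short path crossing two squares of the unknown terrain and a longer detour, with a wide posterior over that terrain's weight because the training environment said nothing about it.

```python
import numpy as np

rng = np.random.default_rng(3)
terrain_w = rng.normal(0.0, 1.0, 1000)   # wide posterior: prior untouched
step_cost = -0.05

short_path = 4 * step_cost + 2 * terrain_w    # 4 steps, 2 on unknown terrain
detour     = 10 * step_cost + 0 * terrain_w   # 10 safe steps, no unknowns

paths = np.stack([short_path, detour])
print("expected:", paths.mean(axis=1))                 # short path wins on average
print("worst 5%:", np.quantile(paths, 0.05, axis=1))   # detour wins robustly
```

Planning in expectation takes the shortcut straight through the uncertain squares; planning against a low quantile (a simple stand-in for risk aversion) pays the longer path length to avoid them.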
**Dylan Hadfield-Menell:** I think risk aversion plays a similar role, in the sense that in many scenarios, principled conservatism with respect to your utility does make sense, and different theoretical constructs could lead to that. So we talked about one thing, which was… and they are all different ways of building this possibility of catastrophic failure into the belief structure somehow.

**Daniel Filan:** I mean, one thing it's related to in my mind is - I don't know if you've listened to [this podcast episode on infra-Bayesianism](https://axrp.net/episode/2021/03/10/episode-5-infra-bayesianism-vanessa-kosoy.html), but it's basically this notion of imprecise probability, where you plan for the worst case out of a set of probability distributions. And you can come up with an update rule where you can plan for the worst case and also be dynamically consistent, which is nontrivial. And yeah, I've just realized there might be connections there.

**Dylan Hadfield-Menell:** Yeah, I could certainly see that. Perhaps some of the intuitive reason for why I would want risk aversion has to do with my statement earlier about utilities… the idea that the goal achievement part of intelligence is smaller than we think it is. So, if you think that the goal achievement component is all the system should be doing, in the sense that maximizing utility is the end goal, then I think from that standpoint, the question "Well, shouldn't you just be maximizing expected utility?" makes a lot of sense. But if you imagine that the goal achievement component is one part of the system, one that's going to be working with representations and abstractions of the real world to come up with plans to implement, then "planning conservatively is probably better in expectation" is perhaps a way to put that.

**Daniel Filan:** So, speaking about aspects of this IRD framework: kind of like how in CIRL you had this prior over reward functions that you have to analyze - and even here you have this prior that you need to analyze - there was also the principal's initial model of the world, which bears really heavily on this analysis. And I'm wondering if you can give us a sense of, if we're trying to do this analysis, how we should formalize that.

**Dylan Hadfield-Menell:** So, just to clarify, the principal's model of the world is their model of this development environment, right?

**Daniel Filan:** Sorry. Yeah. The development environment that they're imagining the system being deployed in.

**Dylan Hadfield-Menell:** I think there's two ways to look at it. One is: this is actually the scenario that you were evaluated in during development. Practically, the system was just designed through iteration on behavior in a particular environment. And so the assumptions behind IRD are basically that the reward function was iterated on enough to get to behavior that is well adapted to the environment.

**Daniel Filan:** And I guess there, you can just know that you, like, trained on [MNIST](https://en.wikipedia.org/wiki/MNIST_database) or something.

**Dylan Hadfield-Menell:** You actually have access to that by definition, in a way.

**Daniel Filan:** Yeah. For listeners who don't know, MNIST is a dataset of pictures of handwritten numbers, with labels for which numbers they are.

**Dylan Hadfield-Menell:** That's kind of the very literal interpretation of that model.
**Dylan Hadfield-Menell:** The other side of it is, I think, what you were gesturing at, which is the designer having a model in their head of how the agent will respond, and having an idea of: "Here are the types of environments my system is likely to be in. And here's the mapping between those incentives and behavior."

**Dylan Hadfield-Menell:** I think if you imagine that as going on inside of someone's head, what this is really telling you is how to be a good instruction follower. If you are working for me and I tell you to do something, or I tell you, "Here are some things that I care about. Here's some representation of my goals," and you don't know what type of context I'm imagining you will be in, then you won't have much information about how to interpret those objectives, and you'll miss things.

**Dylan Hadfield-Menell:** IRD, sort of philosophically here, is saying that the way to interpret a goal someone gives you is to think about the environment and contexts they thought you were likely to be in, and that that's a core piece of cognitive state to be estimating, or something like that.

**Daniel Filan:** Yeah. So, thinking back to the CIRL analysis of the IRD game: in IRD, you sort of have this human policy, which is to write down a reward function that induces good behavior in some test environment. And then you're analyzing what the best response to that policy is. That produces some robot policy. But it seems probably the best response to that robot policy on the human side would not be the original behavior. So I'm wondering, how many steps towards equilibrium should we imagine being taken?

**Dylan Hadfield-Menell:** I think it depends on what information the person has about the environments the system is eventually going to be deployed in. This is going to get confusing, because in this setting now we have the designer's model that they're holding in their mind while they're designing the reward function, and there's that environment. And then there's the set of other environments that the system might be put into. And if you want to design a best response to the robot policy, you have to work backwards from that future sequence of behaviors.

**Daniel Filan:** Yeah. Maybe you need to do it in a POMDP rather than an MDP - a world where there are some things about the state of the world that the human doesn't know.

**Dylan Hadfield-Menell:** Yeah. Perhaps. In some ways, the point of this model is to capture the scenarios where people aren't thinking very much about - or the extent to which… Maybe here's a good way to put it. In this model, we're actually leaving out a really important, potentially large piece of information, which is that the selection of development environments is not arbitrary or random. In fact, we tend to select them in order to communicate things about the objective that we think are important.

**Dylan Hadfield-Menell:** So in some ways the development environment kind of captures our best estimate, in the spirit of this model, of how the robot will be deployed. And in that case, the person is actually going pretty far - potentially all the way to equilibrium, actually. You are absolutely right that thinking more or harder about the way that the robot will be deployed could lead to changes in the proxy reward functions that it's optimal to specify.
**Dylan Hadfield-Menell:** And you are also right, I think, that you could also have that… The best thing to do, if you realize that there's a part of your deployment environment that's not well-represented in your development environment, is to just augment the development environment to include that component. And then you are providing incentives that are kind of better matched. Yeah.

**Dylan Hadfield-Menell:** I'm trying to think of the right summary to close this question on. I think it's that there's definitely a lot of additional iteration that could be possible here, and the opportunity for more coordination. I'm not sure that it makes sense to study those sorts of things directly within this model as it is represented, in the sense that it seems like that type of strategic improvement is perhaps better represented by changes to the development environment, or you end up assuming more… or you would want to have a better, richer cognitive model of reward design and how reward design could go wrong, perhaps.

**Dylan Hadfield-Menell:** I think in setting this up, part of the core idea is: if the person is wrong about the objectives, if the proxy reward is not actually the true reward, what can you do? And so it only really makes sense to study that if people are limited.

**Daniel Filan:** That makes sense. One kind of way you could think about this work is in the context of side effects mitigation, like this example with the lava, right? It was this side effect that humans didn't think about, like, "Oh, what if the robot gets into lava?" And this is a way of avoiding the robot having some nasty side effects.

**Daniel Filan:** Yeah, I guess this episode hasn't been released, so you don't know it yet, but the listeners will have just… [The previous episode of this podcast](https://axrp.net/episode/2021/05/14/episode-7-side-effects-victoria-krakovna.html) was with Victoria Krakovna on side effects mitigation.

**Daniel Filan:** You've actually done some work yourself on this problem: you've coauthored with Alexander Turner on [Attainable Utility Preservation](https://arxiv.org/abs/1902.09725). I'm wondering, how do you think the side effects problem… yeah, what do you think about the relationship between the side effects problem and this approach of inverse reward design?

**Dylan Hadfield-Menell:** Well, they're sort of slightly different, in that one is a solution and the other is a problem.

**Daniel Filan:** Do you think they're matched?

**Dylan Hadfield-Menell:** I'm not sure. There certainly are some definitions of side effects for which it's well-matched to this problem. At the same time, there are some kinds of side effect avoidance strategies… Well, here, I guess what I'll say is: side effect avoidance is a pretty broad range of approaches, and so I wouldn't want to rule out other solution approaches that don't leverage inverse reward design, in its particular Bayesian form, in some way.

**Dylan Hadfield-Menell:** With that said, I think you can describe a lot of side effect avoidance approaches in language similar to the model that we're producing here. What counts as a side effect? What types of problems do we evaluate on? They almost always end up being: well, here's some environment where… here's the reward function that the system got, which is intuitively reasonable. And here is the side effect, which is why that's wrong. And here is how optimizing for this relatively generic term can allow you to avoid it.
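One concrete version of such a "relatively generic term" is the attainable utility preservation penalty from the Turner et al. paper mentioned above. In simplified form (my own paraphrase of the idea, not the paper's exact formulation or scaling), you dock the task reward by how much an action shifts the agent's ability to achieve a set of auxiliary goals, relative to doing nothing:

```python
import numpy as np

def penalized_reward(task_reward, q_aux_action, q_aux_noop, lam=0.1):
    """AUP-style penalty sketch (simplified; not the paper's exact scaling).

    task_reward:    task reward for taking action a in state s.
    q_aux_action:   attainable value of each auxiliary goal after taking a.
    q_aux_noop:     attainable value of each auxiliary goal after a no-op.
    """
    impact = np.abs(np.asarray(q_aux_action) - np.asarray(q_aux_noop)).mean()
    return task_reward - lam * impact

# An action that earns task reward by slashing attainable auxiliary value
# (say, by irreversibly breaking something) gets discounted accordingly:
print(penalized_reward(1.0, q_aux_action=[0.1, 0.0],
                       q_aux_noop=[0.9, 0.8], lam=1.0))   # -> 0.2
```

The penalty never needs to know which side effect matters - it just punishes changes in what the agent could attain, which is why it can catch omissions the designer never wrote into the objective.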
**Dylan Hadfield-Menell:** And that reasoning often relies on some intuitive agreement that this reward is reasonable. What IRD does is provide a probabilistic definition of reasonableness, where rewards are reasonable if they work well in a development environment.

**Dylan Hadfield-Menell:** When I look at a lot of these side effects examples, I often translate them in my head to: “Oh, well, there was a simpler environment where this particular action wasn’t possible. That’s where the reward comes from, and now they’re looking to go into this other setting.”

**Dylan Hadfield-Menell:** There are interesting approaches to that problem which don’t come from a Bayesian reward inference, uncertainty perspective. I’m guessing that you two talked about some of the [relative reachability work](https://arxiv.org/abs/1806.01186) that she’s been involved in.

**Dylan Hadfield-Menell:** That’s a different perspective on side effect avoidance, one which would avoid lava if there are certain properties of the transition function that make it bad, but wouldn’t avoid it if it’s merely the case that whether lava was good or bad was left out of the objective.

**Dylan Hadfield-Menell:** Maybe that’s why I didn’t want to say that there’s a one-to-one relationship between side effect avoidance and inverse reward design: there’s another class of side effect avoidance which involves doing things like preserving the ability to put things back. I think that is solving a similar problem, but I don’t want to claim that it’s the same.

**Daniel Filan:** Now I want to move on to some closing questions about the line of work as a whole. First of all, these papers were published between 2015 and 2017, if I recall correctly, which was some time ago. You’ve mentioned some papers that have been published in addition to the ones we’ve talked about - would you like to give listeners an overview of what’s been done since then on this line of thinking?

**Dylan Hadfield-Menell:** Sure. We’ve extended inverse reward design in a couple of directions, to look at active learning as well as at using it to fuse multiple reward functions into a single one. That can actually make designing reward functions easier: if you only have to worry about one development environment at a time, you can take a kind of divide-and-conquer approach.

**Daniel Filan:** And that’s work with Sören Mindermann?

**Dylan Hadfield-Menell:** The [divide and conquer work](https://arxiv.org/abs/1806.02501) is with Ellis Ratner, and the work on [active learning](https://arxiv.org/abs/1809.03060) was with Sören Mindermann and Rohin Shah, as well as Adam Gleave.

**Daniel Filan:** All right.

**Dylan Hadfield-Menell:** I also mentioned [the assistive multi-armed bandit paper](https://arxiv.org/abs/1901.08654), which looks at what happens when the person doesn’t observe their reward function directly but has to learn about it over time. And we’ve done some work on the mechanics of algorithms within cooperative IRL. For folks who are interested in it, I recommend reading our ICML paper.

**Daniel Filan:** Is that about the generalized Bellman update?

**Dylan Hadfield-Menell:** Yeah, this is [a generalized Bellman update for cooperative inverse reinforcement learning](http://proceedings.mlr.press/v80/malik18a.html). In that paper, we give an efficient algorithm for computing optimal cooperative IRL strategy pairs.
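The structural trick behind that algorithm, at a very high level, is that the human’s actions double as observations about the hidden reward parameter, so the game can be solved by value iteration over (state, belief) pairs. A heavily simplified schematic, assuming a finite set of reward parameters, deterministic transitions, and a known human policy model - all illustrative simplifications, not the paper’s actual construction:

```python
import numpy as np

def belief_update(b, s, a_h, pi_h):
    """b'(theta) ~ b(theta) * pi_h(a_h | s, theta): the human's action
    is evidence about the hidden reward parameter theta."""
    b_new = b * np.array([pi_h(a_h, s, i) for i in range(len(b))])
    return b_new / b_new.sum()

def backup(s, b, V, thetas, actions_r, actions_h, pi_h, T, R, gamma=0.95):
    """One Bellman backup for the robot at a (state, belief) pair.
    V(s', b') is the value function from the previous iteration;
    T(s, a_r, a_h) is an assumed deterministic transition function."""
    q = {}
    for a_r in actions_r:
        total = 0.0
        for a_h in actions_h:
            # Probability the human takes a_h, marginalizing over theta.
            p_ah = sum(b[i] * pi_h(a_h, s, i) for i in range(len(thetas)))
            if p_ah == 0.0:
                continue
            b2 = belief_update(b, s, a_h, pi_h)   # belief after seeing a_h
            s2 = T(s, a_r, a_h)
            # Expected immediate reward, conditioning on the human's action.
            r = sum(b[i] * pi_h(a_h, s, i) * R(s, a_r, a_h, thetas[i])
                    for i in range(len(thetas))) / p_ah
            total += p_ah * (r + gamma * V(s2, b2))
        q[a_r] = total
    best = max(q, key=q.get)
    return q[best], best
```

The shape of the computation - human action observed, belief over rewards updated, robot action chosen against that belief - is the point; the actual paper exploits much more structure than this naive enumeration in order to be efficient.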
**Daniel Filan:** That’s Malayandi Palaniappan and Dhruv Malik?

**Dylan Hadfield-Menell:** Yeah, Malayandi Palaniappan and Dhruv Malik. That’s a paper that I definitely recommend folks interested in this work go look at, because if you want to experiment with non-trivial cooperative IRL games, those algorithms are the ones you’d want to use.

**Daniel Filan:** Okay.

**Dylan Hadfield-Menell:** I mentioned also [a paper that looks at value alignment in the context of recommender systems](https://participatoryml.github.io/papers/2020/42.pdf), which doesn’t directly use cooperative IRL, but applies some of those ideas to identify misalignment and to talk about it more generally.

**Dylan Hadfield-Menell:** There’s a paper called [Incomplete Contracting and AI Alignment](https://arxiv.org/abs/1804.04268), which looks at the connections between these assistance games and a really broad class of economics research on what’s called incomplete contracting - our inability to specify exactly what we want from other people when we’re contracting with them.

**Dylan Hadfield-Menell:** An example of incomplete contracting that people will use for a long time is all of the things that happened when we went into [lockdown](https://en.wikipedia.org/wiki/COVID-19_lockdowns) for [COVID](https://en.wikipedia.org/wiki/COVID-19). Lots of people had contracts for exchanging money for goods or what have you, and then all of a sudden a pandemic happened - something that was maybe written into contracts in some very, very loose ways, like acts-of-God clauses, which are contracting tools to manage uncertainty and our inability to specify what should happen in every possible outcome of the world.

**Dylan Hadfield-Menell:** There’s a very strong argument to make that assistance games are really studying AI under incomplete contracting assumptions, in the sense that the reward function you specify for your system is in effect a contract between yourself and the system.

**Dylan Hadfield-Menell:** There’s also [the paper on attainable utility preservation](https://arxiv.org/abs/1902.09725) with Alex Turner, which I would recommend folks look at. The main idea there is that we measure distance and impact via change in a vector of utility functions, and we show that this has some really nice properties for limiting behavior, and some practical applications in some interesting side effect avoidance environments.

**Dylan Hadfield-Menell:** And the last thing I would direct people towards is my most recent papers on the subject: [multi-principal assistance games](https://arxiv.org/abs/2007.09540), which looks at some of the interesting dynamics that come up when you have multiple people being learned from in an assistance game, and [consequences of misaligned AI](https://arxiv.org/abs/2102.03896), which is a theoretical analysis of proxy objectives where we show, in a fairly strong theoretical sense, that missing features can lead to arbitrarily bad consequences from a utility perspective.
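A toy sketch of the attainable utility preservation idea just mentioned - impact measured as change in a vector of auxiliary utilities - since it reduces to a small reward transformation. The function names and the no-op convention are illustrative, not the paper’s exact formulation:

```python
def aup_reward(r_task, q_aux, s, a, noop, lam=0.1):
    """Penalize an action by how much it changes the agent's ability to
    achieve a set of auxiliary goals, relative to doing nothing.

    q_aux: list of Q-functions, one per auxiliary reward function.
    """
    penalty = sum(abs(q(s, a) - q(s, noop)) for q in q_aux) / len(q_aux)
    return r_task(s, a) - lam * penalty
```

An agent maximizing the transformed reward is discouraged from large irreversible moves - stepping into lava destroys attainable utility for almost any auxiliary goal - even when the task reward is silent about them.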
**Daniel Filan:** All right. And I guess the dual to that question is: what do you think the next steps for this research are? What needs to be done?

**Dylan Hadfield-Menell:** The thing I’m most excited about is integrating meta reasoning models into cooperative IRL games - looking at cooperative IRL where one of the questions the person has to decide for themselves is how hard to think about a particular action or a particular problem.

**Dylan Hadfield-Menell:** I think that this is something that’s missing from a lot of our models. It places strong limits on how much you can learn about utilities, because there are costs to generating utility information that are distinct from actions in the world and are avoidable in some sense.

**Dylan Hadfield-Menell:** I also think that because people choose how much to think about things, and choose what types of computations to run, this makes the relationship between the person and the system even more crucial, because the system may have to do more explicit things to induce the person to provide the information that it needs.

**Dylan Hadfield-Menell:** In regular cooperative IRL, the system might need to steer the person to an informative region of the environment, where they can take actions that provide lots of information about reward. When you introduce meta reasoning and costly cognition, the robot also has to steer the person in belief space, to a place where the person believes that choosing to take those informative actions is worthwhile. And I think that that is a more complicated problem.

**Dylan Hadfield-Menell:** On the other side, figuring out how to build systems that are calibrated for cognitive effort - I think it would be really valuable to point this towards recommender systems, which is something I’ve been thinking about a lot recently. There is a critique, which I think is pretty valid, that a lot of the issues we’re running into stem from the fact that our online systems are largely optimizing for system 1 preferences.

**Daniel Filan:** What’s system 1?

**Dylan Hadfield-Menell:** System 1 is a reference to [Kahneman’s model of the brain](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Two_systems), in which people have two systems: system 1, a fast, reactive, intuitive reasoning system, and system 2, a slower, logical, conscious reasoning system.

**Dylan Hadfield-Menell:** The point that I was trying to make is that a lot of our behavior online, in particular, is reactive and not very thought out. You could argue, in fact, that these systems are designed to push us into that reactive mode. One thing you could imagine being a better situation is people putting more conscious cognitive effort into what kinds of online behaviors they actually want.

**Dylan Hadfield-Menell:** One way to understand this is through the appropriate amount of cognitive effort to spend deciding whether or not to click on a link. If the only impact of your action is that you might read a bad article and stop, or you might read a good article, then there’s a certain amount of thought that it’s optimal to put into that choice.

**Dylan Hadfield-Menell:** However, if clicking on that link determines what links you will see in the future, the appropriate amount of cognition increases, because your actions have a larger effect. And I think that miscalibrating this component is a kind of misalignment: you can think about the system as being misaligned with people, or you can think about people as being misaligned with what the system will do in the future, choosing the wrong level of cognition on behalf of their future selves.
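As a toy illustration of that calibration point (all numbers invented, purely to show the shape of the argument):

```python
# How much is deliberation worth before clicking a link? If the click is
# one-off, the stakes are a single article; if the click also shapes future
# recommendations, the same accuracy gain is multiplied over a horizon.
p_snap, p_deliberate = 0.60, 0.65    # chance of choosing well: fast vs. slow
u_good, u_bad = 1.0, -1.0            # utility of a good vs. bad article
cost_of_thinking = 0.2               # effort cost of deliberating

def gain_from_thinking(horizon):
    per_click = (p_deliberate - p_snap) * (u_good - u_bad)
    return per_click * horizon - cost_of_thinking

print(gain_from_thinking(horizon=1))    # -0.1: not worth it for one article
print(gain_from_thinking(horizon=10))   # +0.8: worth it once clicks shape the feed
```

The per-click decision is identical in both cases; only the downstream reach of the action changed, which is exactly the miscalibration being described.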
**Daniel Filan:** So those are some next steps. But suppose there’s a listener today who is a deep learning practitioner. They use deep learning to make things that they want people to use. How do you think they should change what they’re doing based on assistance game analysis?

**Dylan Hadfield-Menell:** Well, I think one part is an intuitive shift in how you look at the world: explicitly recognizing that there’s a very subjective, normative component to what you are doing when you program your system. There’s an idea that our data is labeled with what we want the system to do, and that is sort of true, as far as it goes.

**Dylan Hadfield-Menell:** In reality, the role that labels play in a deep learning system, or that the reward function plays in an MDP, is as a description of what you want from the system. It’s a way that you are trying to encode your goals into a representation that can be computed on. I think it would help if we, as a field, shifted to thinking more about those types of questions: spending more time thinking about the source of normative information in our systems, and asking explicitly, “Do I have enough information here to capture what I really care about?”

**Dylan Hadfield-Menell:** There’s a tweet from [Dr. Alex Hanna](https://twitter.com/alexhanna), who’s at Google Research, pointing out that for, say, a large language model, you can have trillions of parameters, but ultimately there are only hundreds of decisions that go into selecting your data. Part of our difficulty in getting these large language models to output the text we want, I think, is because of that kind of imbalance. It’s complicated, because language actually has lots of normative information in it; but because the training signal is observational and predictive - this leads to that, I see this text, I produce that text - it doesn’t get interpreted as normative information.

**Dylan Hadfield-Menell:** I tend to think that that is the biggest lesson for practice. The other part is: what is your strategy for integrating qualitative research and analysis into your process? Even if you just want to be a good deep learning researcher who produces neural nets that do a good job, it pays to attend to this loop and to try to optimize it: what are the features of my system’s behavior, ones I like or ones I don’t, that are not currently represented? Identify those behaviors, develop measurements for them, and then integrate that back into the system. The deep learning engineers and deep RL engineers who are the best are really good at completing this loop effectively. They have intuitive skill at completing it, but they also just have really good methods, and they’re pretty formulaic about it.

**Dylan Hadfield-Menell:** I think that holds for whatever scale of system you’re building. If you’re building it on your own - you’re a grad student trying to do policy optimization for a robot - this is a good idea. And if you’re a large company managing search results for the global population, you also want to be doing this. If you look at search, actually, it’s a relatively well-developed area, with good standards for what makes a good search result, and lots of human raters providing explicit feedback about search quality. Arguably part of why that works is because we’re paying for the right kind of data.

**Daniel Filan:** All right. Another related question - this is one I kind of like to ask.
People worried about AI existential risk sort of have this idea that there were people who did some work to develop really smart AI, and they didn’t stop to ask the question, “Okay, how could the work I’m doing have negative impacts?” So: how could the work that you’re doing have negative impacts on the world? And you’re not allowed to say, “Oh, the negative impact is that we wasted our time and it didn’t pan out.”

**Dylan Hadfield-Menell:** No. There are some really clear negative impacts that this work enables. Effective alignment combined with concentrated power is a recipe for some pretty bad situations and environments. Single-agent value alignment on its own really just allows people to get systems to do what they want more effectively. Arguably, value alignment is about as dual-use as a technology could be, at least as it’s currently thought about.

**Daniel Filan:** Sort of like a power plant, right? It’s just a generally empowering thing. Now you have a way to get more electricity, now you can do more things.

**Dylan Hadfield-Menell:** Yeah. Although it’s like a power plant where someone might just build their own power plant -

**Daniel Filan:** Yeah, like a home power plant.

**Dylan Hadfield-Menell:** - and use it for their own purposes. And if you had some people with power and others without, that could lead to scenarios where the people out of power are treated really poorly. Arguably that’s what AI systems are doing to some extent nowadays, in terms of cementing existing power dynamics. Alignment could supercharge that, in a way.

**Dylan Hadfield-Menell:** One of the concerns I have about effectively aligning systems to an individual is that it might be fundamentally immoral to get to a scenario where one individual has undue influence over the future course of the world. That’s a direct one-to-one with these power imbalances. There’s also a practical, short-term version: right now, if Facebook gets really good at aligning systems with itself, and we don’t get good at aligning Facebook with us, that’s potentially bad. And if you start to think about future systems, and systems that could reach strategic dominance, then you might want alignment approaches that cannot align to an individual, but have to align to a group in some way. I know that’s a little bit vague.

**Daniel Filan:** That’s a fine answer.

**Dylan Hadfield-Menell:** It’s kind of like: if we imagine that preferences don’t actually reside within individuals but within societies, then alignment to an individual that allows that individual’s preferences to capture a lot of utility could be quite bad.

**Daniel Filan:** If people are interested in following or engaging with you and your work, how should they do that?

**Dylan Hadfield-Menell:** Yeah, I have a moderately active Twitter account where I publicize my work, and I generally tweet about AI safety and AI alignment issues. That’s [@dhadfieldmenell](https://twitter.com/dhadfieldmenell) - first initial, last name. I’ll also put out a plug: if you’re interested in doing this type of work, and you thought this conversation was fun and you’d like to have more conversations like it with me, I invite you to [apply to MIT’s EECS PhD program](https://gradapply.mit.edu/eecs/apply/login/?next=/eecs/) next year and mention me in your application.

**Daniel Filan:** All right. Well, thanks for appearing on the podcast.
**Dylan Hadfield-Menell:** Thanks so much. It was a real pleasure to be here.

**Daniel Filan:** And to the listeners, I hope you’ll join us again.

**Daniel Filan:** This episode is edited by Finan Adamson. The financial costs of making this episode are covered by a grant from the [Long Term Future Fund](https://funds.effectivealtruism.org/funds/far-future). To read a transcript of this episode, or to learn how to support the podcast, you can visit [axrp.net](https://axrp.net/). Finally, if you have any feedback about this podcast, you can email me at [feedback@axrp.net](mailto:feedback@axrp.net).
7ee2ab83-4f80-404e-bc95-83d9f881dfd3
trentmkelly/LessWrong-43k
LessWrong
In Favor of Niceness, Community, and Civilization [Content warning: Discussion of social justice, discussion of violence, spoilers for Jacqueline Carey books.] [Edit 10/25: This post was inspired by a debate with a friend of a friend on Facebook who has since become somewhat famous. I’ve renamed him here to “Andrew Cord” to protect his identity.] I. Andrew Cord criticizes me for my bold and controversial suggestion that maybe people should try to tell slightly fewer blatant hurtful lies: > I just find it kind of darkly amusing and sad that the “rationalist community” loves “rationality is winning” so much as a tagline and yet are clearly not winning. And then complain about losing rather than changing their tactics to match those of people who are winning. > > Which is probably because if you *really* want to be the kind of person who wins you have to actually care about winning something, which means you have to have politics, which means you have to embrace “politics the mindkiller” and “politics is war and arguments are soldiers”, and Scott would clearly rather spend the rest of his life losing than do this. > > That post [the one debunking false rape statistics] is exactly my problem with Scott. He seems to honestly think that it’s a worthwhile use of his time, energy and mental effort to download evil people’s evil worldviews into his mind and try to analytically debate them with statistics and cost-benefit analyses. > > He gets *mad* at people whom he detachedly intellectually agrees with but who are willing to back up their beliefs with war and fire rather than pussyfooting around with debate-team nonsense. > > It honestly makes me kind of sick. It is exactly the kind of thing that “social justice” activists like me *intend* to attack and “trigger” when we use “triggery” catchphrases about the mewling pusillanimity of privileged white allies. In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence.
0af53bf0-a06b-4444-9923-6c1d91944579
trentmkelly/LessWrong-43k
LessWrong
Review of The Engines of Cognition Introduction[1] For those who read my review of the first Lesswrong collection of essays, A Map that Reflects the Territory (see here for my review), this intro will give you a sense of what the Klingons call nlb’poH, the French call Déjà vu, and the English call Déjà vu. Less Wrong is a forum founded by Artificial Intelligence Theorist Eliezer Yudkowsky in 2009. The stated philosophy is:  > We are a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we work to develop and practice the art of human rationality.  That seems to cover a lot of ground! The actual topics seem to be (1) how does one find the truth in science and in life, (2) AGI (Artificial General Intelligence), and (3) probability. The most common non-trivial word in this book might be Bayes. Another common non-trivial word is Goodhart. (Goodhart’s law is that when a measure becomes a target, it stops being a measure. It is often referred to when an AI system performs well but for the wrong reasons.) A trivial word would be something like the which is likely more common but less interesting. (Or is it trivial? The SIGACT News book review editor Fred Green pointed out that Ohio State has trademarked the. See https://www.cnn.com/2022/06/23/us/ohio-state-university-trademarks-the/index.html. I do not know if that is more or less absurd than Donald Trump’s failed attempt to trademark you’re fired. See https://www.cobizmag.com/who-owns-the-trademark-to-youre-fired/ to see who really owns the trademark to you’re fired.)  The Engines of Cognition are actually a set of four books, titled Trust, Incentives, Modularity, and Failure. Each book is small—about 9 inches long and 5 inches wide. They can be read in any order. This set of book is a best-of-2019 collection as decided by the readers in some fashion. 2 General Comments PROS: Many of the essays bring up a topic point that I had not thought of be
56309ce5-4bb6-4c4e-9271-81367a0c5103
trentmkelly/LessWrong-43k
LessWrong
A short dialogue on comparability of values Q: Are all of a person's values comparable with each other? For example, is a candlelit dinner comparable to a sunset walk on a beach? A: Of course. You can ask the person to choose between these two things. Their answer will give you information about what they value more. Q: What if the person can't choose? A: Then they probably value these two things about equally. Q: Okay, I have another question. Are all abstract concepts comparable to each other by weight? A: Come again? Q: I mean, we can ask a person: "Is one mile heavier than one hour or vice versa?" That will give you information about that person's weight function, do they assign more weight to one mile or one hour. A: The person can't choose, because the question is nonsense. Q: But by your own argument above, doesn't that mean they weigh these things about equally? A: It's different, because the question about value feels more meaningful to the person, even if they can't give an answer. Q: But a question can feel meaningful without being about anything real. For example, questions about gods and demons feel meaningful to many people. What if questions about which thing is more valued are also like that? A: The difference is that value doesn't only manifest in answers to questions, it also manifests in what actions people choose. Q: Do you mean, for a specific binary choice you can imagine a person in a room faced with two buttons and so on? A: Exactly. Q: Very well. Imagine a person in a room faced with two buttons, saying "one mile is heavier than one hour" and "vice versa". A: Screw you! ---------------------------------------- Tedious explanation of the joke: I've long been puzzled by the argument that we can ask people to choose between things, therefore people have preferences. Today I realized how to kick that argument down: by pointing out that you can ask people anything at all. So the mere act of asking can't be evidence that the question is meaningful. Very quickly this dialogu
dee396e4-795a-423a-bc04-031936302d11
StampyAI/alignment-research-dataset/blogs
Blogs
Post-fundraiser update We concluded our [2016 fundraiser](https://intelligence.org/2016/09/16/miris-2016-fundraiser/) eleven days ago. Progress was slow at first, but our donors came together in a big way in the final week, nearly doubling our final total. In the end, donors raised **$589,316** over six weeks, making this our second-largest fundraiser to date. I’m heartened by this show of support, and extremely grateful to the 247 distinct donors who contributed. We made substantial progress toward our immediate funding goals, but ultimately fell short of our $750,000 target by about $160k. We have a number of hypotheses as to why, but our best guess at the moment is that we missed our target because more donors than expected are waiting until the end of the year to decide whether (and how much) to give. We were experimenting this year with running just one fundraiser in the fall (replacing the summer and winter fundraisers we’ve run in years past) and spending less time over the year on fundraising. Our fundraiser ended up looking more like recent summer funding drives, however. This suggests that either many donors are waiting to give in November and December, or we’re seeing a significant decline in donor support: Looking at our donor database, preliminary data weakly suggests that many traditionally-winter donors are holding off, but it’s still hard to say. This dip in donations so far is offset by the Open Philanthropy Project’s generous $500k grant, which raises our overall 2016 revenue from $1.23M to $1.73M. However, $1.73M would still not be enough to cover our 2016 expenses, much less our expenses for the coming year: (2016 and 2017 expenses are projected, and our 2016 revenue is as of November 11.) To a first approximation, this level of support means that we can continue to move forward without scaling back our plans too much, but only if donors come together to fill what’s left of our **$160k gap** as the year draws to a close. In practical terms, closing this gap will mean that we can likely trial more researchers over the coming year, spend less senior staff time on raising funds, and take on more ambitious outreach and researcher-pipeline projects. E.g., an additional expected $75k / year would likely cause us to trial one extra researcher over the next 18 months (maxing out at 3-5 trials). Currently, we’re in a situation where we have a number of potential researchers that we would like to give a 3-month trial, and we lack the funding to trial all of them. If we don’t close the gap this winter, then it’s also likely that we’ll need to move significantly more slowly on hiring and trialing new researchers going forward. Our main priority in fundraisers is generally to secure stable, long-term flows of funding to pay for researcher salaries — “stable” not necessarily at the level of individual donors, but at least at the level of the donor community at large. If we make up our shortfall in November and December, then this will suggest that we shouldn’t expect big year-to-year fluctuations in support, and therefore we can fairly quickly convert marginal donations into AI safety researchers. If we don’t make up our shortfall soon, then this will suggest that we should be generally more prepared for surprises, which will require building up a bigger runway before growing the team very much. 
Although we aren’t officially running a fundraiser, we still have quite a bit of ground to cover, and we’ll need support from a lot of new and old donors alike to get the rest of the way to our $750k target. Visit **[intelligence.org/donate](http://intelligence.org/donate)** to donate toward this goal, and do spread the word to people who may be interested in supporting our work. You have my gratitude, again, for helping us get this far. It isn’t clear yet whether we’re out of the woods, but we’re now in a position where success in our 2016 fundraising is definitely a realistic option, provided that we put some work into it over the next two months. Thank you. --- **Update December 22**: We have now hit our $750k goal, with help from end-of-the-year donors. Many thanks to everyone who helped pitch in over the last few months! We’re still funding-constrained with respect to how many researchers we’re likely to trial, as described above — but it now seems clear that 2016 overall won’t be an unusually bad year for us funding-wise, and that we can seriously consider (though not take for granted) more optimistic growth possibilities over the next couple of years. December/January [donations](https://intelligence.org/donate/) will continue to have a substantial effect on our 2017–2018 hiring plans and strategy as we try to assess our future prospects. For some external endorsements of MIRI as a good place to give this winter, see a suite of recent evaluations by [Daniel Dewey](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016#Machine_Intelligence_Research_Institute), [Nick Beckstead](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016#Machine_Intelligence_Research_Institute-0), [Owen Cotton-Barratt](http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/), and [Ben Hoskin](http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/).
976f740b-fcff-4547-9eec-992d8f7c8c96
trentmkelly/LessWrong-43k
LessWrong
A Toy Model of Hingeyness This is a crosspost from the Effective Altruism forum Epistemic status: Attempt to clarify a vague concept. This should be seen as a jumping of point and not as a definitive model. Definition of Hingeyness > The Hinge of History refers to a time when we have an unusually high amount of influence over the future of civilization, compared to people who lived in the eras before and after ours. I will use the model I made for my previous question post to explain why I don't think this definition is very useful. As before, in this model are only two possible choices per year. The number inside the circle refers to the amount of utility that year experiences and the two lines are the two options that this year has to decide on. The amount of utility which each option will add to the next year is written next to the lines. (link to image) Older decisions are hingier I think we all agree that we should try to avoid the option that will lead to better results in the next year, but will create less utility in the long run. In this model the year with 1 utility could choose the +2 option, but it should choose the +1 option because it leads to better options next year. Let's assume that all life dies after the last batch of years. The 1 utility then 3 utility then 0 utility option is the worst because you've generated 4 utility in total. 1-3-6 is just as good as 1-2-7, but 1-2- 8 is clearly the best path. The implication is that later decisions are never hingier than earlier ones. 1 gets a range of options that ranges from 4 utility to 11 utility, no other option get's that kind of range. In fact, it's mathematically impossible that future decisions have a range of options that's larger than the previous decisions had (assuming the universe will end and isn't some kind of loop). It's also mathematically impossible that future decisions have ranges where the best and worst case scenarios give you more utility than the range of the previous years. This is, unless negativ
582597fe-947c-4e45-ad2b-bcaee9401771
trentmkelly/LessWrong-43k
LessWrong
A terrifying variant of Boltzmann's brains problem I am new to this forum even though I have read a lot on the subject of boltzmann's brains and have come to a terrifying conclusion, the classic argument which for example Sean Carroll used, "The cognitive instability" (https://arxiv.org/abs/1702.00850), is not enough to dispel the doubt, his argument states that if thanks to scientific observations I come to the conclusion that at 99% I should be a Boltzmann brain, then follows that my perceptions are more likely to be random and therefore inconsistent, one falls into a contradiction where if it is true that I am a Boltzmann brain then it does not sense to think so because all the observations that led me to think of it are problably wrong because they are random. This makes sense for heaven's sake, but there are several problems. The first is that one could argue that a Boltzmann brain could not even survive, it would die immediately from lack of oxygen, blood and nutrients. The second is that it would not be able to form in the sense that, being the Boltzmann brains slowly form (same link above of Sean Carroll), the "meat" of the brain would freeze or should decaying for lack of vital support. Now, a digression, I did not understand how exactly boltzmann's brains are formed, on Wikipedia, they cite 2: by quantum fluctuation and via nucleation, Sean Carroll cites 3: Vacuum fluctuations, Measurement-induced fluctuations, Boltzmann fluctuations. Now there is a problem, I don't understand too much about physics, if please someone can explain to me at least those from Wikipedia, from what I "understand" is that in quantum fluctuations, the object appears already fully formed, while for those on nucleation, are slowly formed with the accidental union of atoms, although I'm not sure because Wikipedia mentions strange concepts such as de Sitter's cosmological horizon. A question that follows is: particles and atoms in a vacuum can really unite to form macroscopic objects such as a brain, a stone, a computer, etc.. witho
cb805221-0fb5-4bbf-a506-d53fbe3ad328
trentmkelly/LessWrong-43k
LessWrong
CFAR-run MIRI Summer Fellows program: July 7-26 CFAR will be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem. The intent of the program is to boost participants as far as possible in four skills: 1. The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops; 2. “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences.  (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.) 3. The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence). 4. The basics of AI safety-relevant technical research.  (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.) The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need. If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/ Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine our skill at navigating it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.
7ad385df-3862-4a96-912b-a4f5db35f276
trentmkelly/LessWrong-43k
LessWrong
Genies and Wishes in the context of computer science Using computers to find a cure What it could be like to make a program which would fulfill our wish to "cure cancer"? I'll try to briefly present the contemporary mainstream CS perspective on this. Here's how "curing cancer using AI technologies" could realistically work in practice. You start with a widely applicable, powerful optimization algorithm. This algorithm takes in a fully formal specification of a process, and then finds and returns the parameters for that process for which the output value of the process is high. (I am deliberately avoiding use of the word "function"). If you wish to cure a cancer, even having this optimization algorithm at your disposal, you can not simply write "cure cancer" on the terminal. If you do so, you will get something to the general sense of: > No command 'cure' found, did you mean: >  Command 'cube' from package 'sgt-puzzles' (universe) >  Command 'curl' from package 'curl' (main) The optimization algorithm by itself not only does not have a goal set for it, but does not even have a domain for the goal to be defined on. It can't by itself be used to cure cancer or make paperclips. It may or may not map to what you would describe as AI. First, you would have to start with the domain. You would have to make a fairly crude biochemical model of the processes in the human cells and cancer cells, crude because you have limited computational power and there is very much that is going on in a cell. 1 On the model, you define what you want to optimize - you specify formally how to compute a value from the model so that the value would be maximal for what you consider a good solution. It could be something like [fraction of model cancer cells whose functionality is strongly disrupted]*[fraction of model noncancer cells whose functionality is not strongly disrupted]. And you define model's parameters - the chemicals introduced into the model. Then you use the above mentioned optimization algorithm to find which extra parameter