58191018-e08d-4abf-85c8-83bd68ba0767
trentmkelly/LessWrong-43k
LessWrong
Poll Results on AGI The poll has now settled quite a bit. I have been busy, so it took me a bit longer than expected to write this up – but without further ado, let us look at some of the results. Over the past month 74 people voted, 70 people were grouped, and 2,095 votes were cast, with 55 statements submitted. I will briefly list some of the results together with some of my own thoughts, but if you want more detail, go look at the auto-generated report. Some Results We will look at some of the majority opinions first, that is, statements that most people agree (or disagree) with. We weigh both the percentage and the number of people who voted. All numbers can be found in the report. * Corporate or academic labs are likely to build AGI before any state actor does. (75% Agree, 18% Passed) * I believe there needs to be a bigger focus on technical research (64% Agree, 30% Passed) * AI-to-human safety is fundamentally the same kind of problem as any interspecies animal-to-animal cooperation problem (70% Disagree, 12% Passed) * I think it is most likely the first AGI will be developed in a leading AI lab, but still plausible it will be developed by a smaller group (65% Agree, 17% Passed) * A sufficiently large AI-related catastrophe could potentially motivate the US government to successfully undertake a pivotal act. (60% Agree, 21% Passed) Opinion Groups Pol.is automatically generates opinion groups based on similar voting patterns. In this case the algorithm identified two groups, A and B. They can roughly be characterized as group A believing that AGI will come soon and will be very dangerous, whereas group B believes that AGI will take some time and be only somewhat dangerous. (During the voting there were three stable groups for a while, where the third group roughly believed that AGI will come soon, but won't be that dangerous.) So let us see what beliefs make group A special. The most important one is that "[they] think the probability of AGI before 2040 is above 50%". Ninety
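As a rough, hypothetical sketch of the kind of grouping step pol.is performs: cluster the voter-by-statement vote matrix (agree = +1, disagree = -1, pass = 0) and inspect what each cluster most agrees with. Pol.is is reported to use dimensionality reduction followed by clustering; the pipeline, random stand-in data, and numbers below are illustrative only, not the poll's actual computation.

```python
# Illustrative sketch (not pol.is's actual pipeline): group voters
# by similar voting patterns. Votes: +1 agree, -1 disagree, 0 pass.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in vote matrix: 70 grouped voters x 55 statements.
votes = rng.choice([-1, 0, 1], size=(70, 55))

# Reduce to 2 dimensions, then cluster into two opinion groups (A and B).
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for g in (0, 1):
    members = votes[labels == g]
    # Statements with the strongest mean agreement distinguish the group.
    top = np.argsort(members.mean(axis=0))[::-1][:3]
    print(f"group {'AB'[g]}: {len(members)} voters, most-agreed statements {top}")
```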
d6146ac4-d650-41c7-9601-ff7ded367dce
trentmkelly/LessWrong-43k
LessWrong
The Laws of Magic > People are always telling you that "we have always done thus", and then you find that their "always" means a generation or two, or a century or two, at most a millennium or two.  Cultural ways and habits are blips compared to the ways and habits of the body, of the race.  There really is very little that human beings on our plane have always done, except find food and drink, sing, talk, procreate, nurture the children, and probably band together to some extent. > > - Ursula K. Le Guin, "Seasons of the Ansarac", Changing Planes Human cultures vary wildly and discursively, so it is worth noting which things all known human societies have in common.  Several generations ago, anthropologists noted that cultures' beliefs about a suite of concepts crudely describable as 'magic' had certain principles in common. Humans seem to naturally generate a series of concepts known as "Sympathetic Magic", a host of theories and practices which have certain principles in common, two of which are of overriding importance.  These principles can be expressed as follows:  the Law of Contagion holds that two things which have interacted, or were once part of a single entity, retain their connection and can exert influence over each other; the Law of Similarity holds that things which are similar or treated the same establish a connection and can affect each other. These principles are grossly, obviously, in contradiction with everyday experience.  Thus, many cultures restrict the phenomena to which the laws supposedly apply to non-standard, special cases, most especially to individuals who it is asserted have unusual powers, or to ritual actions that are not commonly replicated in normal life.  Examples range from African sorcerers who could supposedly bring death to their enemies by stabbing their footprints, to the Imperial City of ancient China, which was designed to function as a stylized representation of the whole of the country and induce peace as long as the Emperor sat in his throne
bc7638f7-cf6a-4f7f-a64d-0f8a19bd3121
trentmkelly/LessWrong-43k
LessWrong
LW Migration Announcement The votes are in, and of the eligible 376 voters, 102 voted to Migrate and 15 voted to archive. With 87% in favor, we’re going ahead with the transition, which will begin tomorrow (3/22) at 6pm Pacific time. Trike will take a snapshot of the database, transfer it to us, and we’ll begin the database import of new material since the last import. At 7pm, the DNS server will point to our server instead of Trike’s, and so people visiting lesswrong.com will see the new site. The import will run in parallel with the site operating, so there won’t be any downtime of the new site. Some things to note: 1. Old links will continue to work, redirecting to the right new URL. We’ll be watching the analytics to notice any failures, and you can also alert us through Intercom (which we’ll shut off a few days into the transition, as problems get fixed). 2. Lesserwrong links will redirect to the same page at lesswrong. 3. We’ll add old karma scores to new karma scores, and may adjust the vote weight algorithm accordingly. 4. Please report issues here. Also, for those of you near Berkeley, we’ll be throwing a launch party on April 7th at 7pm (LW, Facebook).
bd03bfcd-097f-4c84-9dab-5f7a23942e58
trentmkelly/LessWrong-43k
LessWrong
Building Trust in Strategic Settings In the previous post, we saw some different types of claims that we might want software systems to automatically evaluate. Ideally, we'd like our software systems to incorporate accurate and useful information, and disregard inaccurate or irrelevant information.  In this post, I'll describe some factors that can help our systems to build accurate models of the world, using information from their own sensors and other systems, even when those systems were designed for different purposes using different ways to model the world, by different designers with different ways they would like the world to be. Translating Between Ontologies For software systems, all factual claims start out in an ontology that looks like "this is the data that came in from my sensor channels." For a thermostat, that looks like "this is the signal I received on the channel that my thermometer should be hooked up to." An adversary might have chopped off the thermometer, and be feeding my thermostat false signals, but they are feeding those false signals into channels on my thermostat where the thermometer signals are supposed to go. It is a further and separate factual claim that "all my sensors are calibrated and configured correctly, and are feeding signals to their designated channels as expected." For many systems, like household thermostats, it's fine to take sensor data at face value. But other times we'll want our systems to have ways of handling the possibility that they're receiving inaccurate information, or even being actively misled by an adversary. When it's clear from context, I'll drop the qualifier that a software system only receives alleged sensor data on their corresponding channels. But a system can honestly report "this came in on the channel designated for my sensor" without further claiming that this information came from a sensor that is working properly and not being misled. Once we have sensor data, we can apply an epistemology to translate from that sensor data o
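The two-layer claim structure described above can be made concrete as a data structure: a system can honestly report what arrived on a channel while leaving the further claim about the sensor open. The following is a minimal hypothetical sketch; the type and field names are mine, not from any real system.

```python
# Hypothetical sketch of the distinction drawn above: a system can report
# what arrived on a channel without claiming the sensor behind it is honest.
from dataclasses import dataclass

@dataclass
class ChannelReading:
    channel: str        # e.g. "thermometer_input"
    raw_signal: float   # what actually arrived on that channel

@dataclass
class SensorClaim:
    reading: ChannelReading
    sensor_verified: bool  # separate, further claim: the sensor is
                           # calibrated, connected, and not being spoofed

# A thermostat can honestly assert the first without asserting the second.
r = ChannelReading(channel="thermometer_input", raw_signal=21.5)
claim = SensorClaim(reading=r, sensor_verified=False)
print(claim)
```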
3450e7bf-4e00-4820-9d2c-3b7431e1c234
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bratislava Discussion article for the meetup : Bratislava WHEN: 29 September 2014 06:00:00PM (+0200) WHERE: Bistro The Peach, Mariánska 3, Bratislava For those who haven't come in a while: Bistro The Peach moved one street over about two months ago, and may still not be properly signposted, but it is across from the Richman shop on Mariánska. Or just follow Google Maps and the street number. The entrance looks like this: at the start of the passage there are short stairs into the first part of the bistro, next to it is the second part, where we sit, and then there is also a terrace. We also have a mailing group, where I usually send out information about upcoming meetups. In any case, until the end of 2014 meetups will be regular, every 3 weeks. Discussion article for the meetup : Bratislava
6f57eeb0-e922-4944-9117-7fc7d6ad0da1
StampyAI/alignment-research-dataset/blogs
Blogs
state of my alignment research, and what needs work state of my alignment research, and what needs work --------------------------------------------------- in this post i give an overview of the state of my [AI alignment](ai-doom.html) research, as well as what i think needs to be worked on, notably for people who might want to join my efforts. at this point, my general feeling is that **i'm not very confused about how to save the world**, at least from a top-level view. a lot of people spend a lot of time feeling confused about how to deal with current ML models and what those would optimize for and how to wrangle them — i don't, i ignore these questions and skip straight to how to build some powerful aligned agentic thing to save the world, and **my model of how to do that feels not very confused from a top-level perspective**. it's just that it's gonna take a lot of work and many implementation details need to be filled in. ### threat model i still think the [intelligence explosion](http://intelligence.org/papers/IEM.pdf) caused by [recursive self-improvement](why-timelines-short.html) (RSI) is the most likely way we die — unfortunately, my in-depth thoughts about this seem potentially capability [exfohazard](https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info)ous. this has implications both for what the problem is and what the solution is: for the problem, it implies we don't particularly see things coming and die quickly in a scenario akin to the kind yudkowsky would predict. for the solution, it implies that we might be able to use RSI to our advantage. ### theory of change [it seems to me like coordination is too hard](outlook-ai-risk-mitigation.html) and [decisive-strategic-advantage](https://publicism.info/philosophy/superintelligence/6.html)-enabling capabilities are close at hand. for these reasons, the way that i see the world being saved is one organization on its own building an aligned, singleton AI which robustly saves the world forever. the one way to do this which i'd have any confidence in being [continuously aligned](ai-alignment-curves.html) / not being subject to the [sharp left turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) is by implementing what i call [formal alignment](formal-alignment.html): a formal-goal-maximizing AI, given a formal goal whose maximization actually leads to good worlds, such that more capabilities applied to maximizing it only improves our odds. i believe we can build an RSI system which bootstraps such a scheme, and this can save us the very difficult work of building an accurate and robust-to-capabilities model of the world in the AI, ensuring it shares our concepts, and pointing to those; i explain this perspective in [*clarifying formal alignment implementation*](clarifying-formal-alignment-implementation.html). my current best shot for an aligned formal goal is [QACI](narrative-explanation-qaci.html) (see also [a story of how it could work](narrative-explanation-qaci.html) and [a tentative sketch at formalizing it](rough-sketch-formal-aligned-ai.html)), which implements something [like coherent extrapolated volition](cev-coherent-enough.html) by extending a ["past user"](outer-alignment-past-user.html)'s reflection to be simulated/considered arbitrarily many times, until alignment is solved. 
### things that need work i've got something like a plan, and more importantly i've got — i think — a model of formal alignment that lets me do some exploration of the space of similar plans and update as i find better options. obtaining such a model seems important for anyone who'd want to join this general alignment agenda. for someone to help, it'd probly be good for them to grok this model. other than that, work that needs to be done includes: * explore the space of formal alignment, both around the current local guessed-optimum by climbing along the hill of improvements, and by looking for [entirely different plans](formal-alignment.html) such as [this](communicating-successful-alignment.html) * figure out some important pieces of math, such as [locating patterns of bits in solomonoff hypotheses for worlds, running counterfactuals of them](rough-sketch-formal-aligned-ai.html), and drawing up causation between them * finding other potentially useful [true names](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) in case they change the ease-of-implementation of various [formal alignment schemes](formal-alignment.html) * working on formal "inner alignment": what does it take to build a powerful formal-goal maximizing AI which *actually* maximizes its goals instead of doing something else like being overtaken by [demons](word-report-3.html) / [mesa-optimizers](https://www.alignmentforum.org/tag/mesa-optimization) if you (yes, you!) are interested in helping with my agenda, but are worried that you might not be qualified, see [*so you think you're not qualified to do technical alignment research?*](so-you-think-not-qualified-alignment.html). i'm increasingly investigating options to [create an alignment organization](https://twitter.com/carad0/status/1630209317772095489) and/or [do some mentorship in a formal manner](https://twitter.com/carad0/status/1630217085803548672), but i'm also potentially open to doing some informal mentorship *right now*. for any of these purposes, or anything else, **you (yes, you!) are very welcome to get in touch with me** ([alignment discord](https://discord.com/invite/5M8GasMp8p), [twitter](https://twitter.com/carad0/), [lesswrong](https://www.lesswrong.com/users/tamsin-leake), email visible at the top of [my blog](https://carado.moe/)).
cd2dab83-b1a8-44c4-a697-3a131b94dd6a
trentmkelly/LessWrong-43k
LessWrong
It's ok I probably shouldn't be ranting out my thoughts here, but I really want to get these thoughts out of my head and I'm hopeful no one will come across this random piece. Have you ever felt obsessed with something you're building? I did - my last 2 years in college. It's an amazing feeling. It's like riding the ocean current - you have no idea why you're going so fast but the speed feels insane and you're at escape velocity. Everything else fades. It's just this desire to see something manifest itself in the real world. Was it really that idealistic? No. There were 4 of us in a dorm room. You get comfortable with weird glances in your direction and then one day it's become normal. You get used to people thinking you're crazy for staying up past 3 am every day and working - many times even during lunch breaks. You get used to not talking to people - it's difficult to explain to them this intense sensation that's driving you and they weren't really the kind of people who'd understand[1]. Hell, I never understood it then either. It was definitely lonely but I was curious. I wanted to know more about who I was. I disappeared under a pseudonym to answer them - who am I? If no one knew me, would I still learn and work on the same things? If no one appreciates me, do I care about my work enough that I'd still do it? If I take away all external driving factors like fame, security, power (except money lol, we need to be able to afford a life) - would I find my truer motivation in life?  It's on the edge of insanity that you see the most incredible views. It's always about people. I just wanted to have fun working with people I enjoy being around on things that matter. Working on open source is different. You're free. You can do everything and anything - the only barriers are your mind. It's a constant battle tbh. You get used to it. You want to battle even better problems the next time. The thrill hits when you end up understanding/building things that looked impossible to yo
5bbc3b1d-5e76-4fbd-9c08-bc0528c16798
trentmkelly/LessWrong-43k
LessWrong
A Gym Gridworld Environment for the Treacherous Turn EDIT: posted here for feedback and discussion. I plan to continue working on different models/environments, so feel free to suggest improvements. (tl;dr: In an attempt to better understand the treacherous turn, I created a gridworld environment where an agent learns to deceive an overseer by adopting an aligned behaviour when weak and takes control after capability gains) ---------------------------------------- At some point in its development, a seed AI may realize that it needs to get rid of its supervisors to achieve its goals. The conception of deception occurs when it conceives that, in order to maximize its chance of taking over, it must begin by exhibiting human-desirable behaviors, before undertaking a treacherous turn when humans are no longer a threat. From the human perspective, the AI would keep on exhibiting desirable behavior, until it eventually appears dangerous, but is already unstoppable. In an attempt to better formalize the treacherous turn without using "loaded concepts", Stuart Armstrong proposed a toy model of the treacherous turn based on "The Legend of Zelda: A Link to the Past", which looked like this: In the comments, people mentioned how this model helped them "move the topic from the 'science fiction' area to 'I can imagine it happening now'", and seemed interested in an actual Link to the Past Minigame. There have been other simulations of the treacherous turn in the last three years (see for instance gwern's DQN box-pushing robot or Stuart Armstrong's video), but none of them actually simulate a takeover where a supervisor is killed. Hence, I decided to give it a try and simulate Stuart Armstrong's Link to the Past toy model. A Gym Gridworld Environment Gym is an open-source toolkit for Reinforcement Learning Environments developed by OpenAI. I decided to use this interface to develop the gridworld environment. The GitHub repository with the code, demo, and all the details is here: https://github.com/mtrazzi/gym-alt
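The post's actual environment lives in the repository linked above. As a minimal sketch of the custom-environment pattern Gym supports (using the classic `gym.Env` interface of that era; the class name and reward scheme here are hypothetical, not the post's code):

```python
# Minimal sketch of a custom Gym environment (classic pre-0.26 API).
# Class name and reward details are illustrative, not the post's code.
import gym
import numpy as np
from gym import spaces

class GridworldEnv(gym.Env):
    """A tiny NxN gridworld where the agent moves toward a goal cell."""

    def __init__(self, size=5):
        self.size = size
        self.action_space = spaces.Discrete(4)  # up, down, left, right
        self.observation_space = spaces.Box(0, size - 1, shape=(2,), dtype=np.int64)
        self.goal = np.array([size - 1, size - 1])

    def reset(self):
        self.pos = np.array([0, 0])
        return self.pos.copy()

    def step(self, action):
        moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
        self.pos = np.clip(self.pos + moves[action], 0, self.size - 1)
        done = bool((self.pos == self.goal).all())
        reward = 1.0 if done else 0.0
        return self.pos.copy(), reward, done, {}
```

A treacherous-turn variant would extend something like this with an overseer, a capability-gain event, and a reward structure that makes deception instrumentally useful while the agent is weak.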
4b823515-7a17-4be5-a0cb-3fa769be424d
trentmkelly/LessWrong-43k
LessWrong
Anti-Pascaline agent A putative new idea for AI control; index here. Pascal's wager-like situations come up occasionally with expected utility, making some decisions very tricky. It means that events of the tiniest of probability could dominate the whole decision - intuitively unobvious, and a big negative for a bounded agent - and that expected utility calculations may fail to converge. There are various principled approaches to resolving the problem, but how about an unprincipled approach? We could try and bound utility functions, but the heart of the problem is not high utility, but high utility combined with low probability. Moreover, this has to behave sensibly with respect to updating.   The agent design Consider a UDT-ish agent A looking at input-output maps {M} (ie algorithms that could determine every single possible decision of the agent in the future). We allow probabilistic/mixed output maps as well (hence A has access to a source of randomness). Let u be a utility function, and set 0 < ε << 1 to be the precision. Roughly, we'll be discarding the highest (and lowest) utilities that are below probability ε. There is no fundamental reason that the same ε should be used for highest and lowest utilities, but we'll keep it that way for the moment. The agent is going to make an "ultra-choice" among the various maps M (ie fixing its future decision policy), using u and ε to do so. For any M, designate by A(M) the decision of the agent to use M for its decisions. Then, for any map M, set max(M) to be the lowest number s.t. P(u ≥ max(M)|A(M)) ≤ ε. In other words, if the agent decides to use M as its decision policy, this is the maximum utility that can be achieved if we ignore the highest-valued ε of the probability distribution. Similarly, set min(M) to be the highest number s.t. P(u ≤ min(M)|A(M)) ≤ ε. Then define the utility function uMε, which is simply u, bounded between max(M) and min(M). Now calculate the expected value of uMε given A(M), call this Eε(u|A(M)). The age
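For a discrete outcome distribution, this truncation can be computed directly. The following is a hedged numeric sketch of one way to operationalize it, assuming discrete outcomes with distinct utility values and taking max(M)/min(M) over achieved values:

```python
# Hedged numeric sketch of the epsilon-truncation above, assuming a
# discrete distribution over distinct utility values conditional on A(M).
import numpy as np

def truncated_expected_utility(utilities, probs, eps):
    """E_eps(u | A(M)): clip u to [min(M), max(M)] before taking the
    expectation, where max(M) is the smallest value with <= eps probability
    mass strictly above it, and min(M) the largest with <= eps mass below."""
    order = np.argsort(utilities)
    u = np.asarray(utilities, float)[order]
    p = np.asarray(probs, float)[order]

    above = np.concatenate([np.cumsum(p[::-1])[::-1][1:], [0.0]])  # P(u > u_i)
    below = np.concatenate([[0.0], np.cumsum(p)[:-1]])             # P(u < u_i)

    hi = u[above <= eps].min()  # max(M): ignore the top eps of the distribution
    lo = u[below <= eps].max()  # min(M): ignore the bottom eps

    return float(np.sum(np.clip(u, lo, hi) * p))

# A Pascal's-mugging-shaped lottery: huge payoff at tiny probability.
# The 1e9 outcome (probability 0.002 < eps) is clipped away: result ~0.502.
print(truncated_expected_utility([0.0, 1.0, 1e9], [0.498, 0.5, 0.002], eps=0.01))
```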
eeacf7db-c057-4cc8-8983-bbf4ece094dd
StampyAI/alignment-research-dataset/arbital
Arbital
Do-What-I-Mean hierarchy Do-What-I-Mean refers to an aligned AGI's ability to produce better-aligned plans, based on an explicit model of what the user wants or believes. Successive levels of DWIM-ness: - No understanding of human intentions / zero DWIMness. E.g. a Task AGI that is focused on one task being communicated, where all the potential [impacts](https://arbital.com/p/2pf) of that task need to be separately [vetted](https://arbital.com/p/2qq). If you tell this kind of AGI to 'cure cancer', you might need to veto plans which would remove the cancer but kill the patient as a side effect, because the AGI doesn't start out knowing that you'd prefer not to kill the patient. - Do What You Don't Know I Dislike. The Task AGI has a background understanding of some human goals or which parts of the world humans consider especially significant, so it can more quickly generate a plan likely to pass human [vetting](https://arbital.com/p/2qq). A Task AGI at this level, told to cure cancer, will take relatively fewer rounds of Q&A to generate a plan which carefully seals off any blood vessels cut by removing the cancer; because the AGI has a general notion of human health, knows that [impacts](https://arbital.com/p/2pf) on human health are significant, and models that users will generally prefer plans which result in good human health as side effects rather than plans which result in poor human health. - Do What You Know I Understood. The Task AGI has a model of human *beliefs,* and can flag and report divergences between the AGI's model of what the humans expect to happen, and what the AGI expects to happen. - DWIKIM: Do What I Know I Mean. The Task AGI has an explicit psychological model of human preference - not just a list of things in the environment which are significant to users, but a predictive model of how users behave which is informative about their preferences. At this level, the AGI can read through a dump of online writing, build up a model of human psychology, and guess that you're telling it to cure a cancer because you altruistically want that person to be healthier. - DWIDKIM: Do What I Don't Know I Mean. The AGI can perform some basic [extrapolation](https://arbital.com/p/3c5) steps on its model of you and notice when you're trying to do something that, in the AGI's model, some further piece of knowledge might change your mind about. (Unless we trust the DWIDKIM model a *lot*, this scenario should imply "Warn the user about that" not "Do what you think the user would've told you.") - (Coherent) Extrapolated Volition. The AGI does what it thinks you (or everyone) would've told it to do if you were as smart as the AGI, i.e., your decision model is extrapolated toward improved knowledge, increased ability to consider arguments, improved reflectivity, or other transforms in the direction of a theory of normativity. Risks from pushing toward higher levels of DWIM might include: - To the extent that DWIM can originate plans, some portion of which are not fully supervised, then DWIM is a very complicated goal or preference system that would be harder to train and more likely to break. This failure mode may be less likely if some level of DWIM is *only* being used to *flag* potentially problematic plans that were generated by non-DWIM protocols, rather than generating plans on its own. 
- Accurate predictive psychological models of humans might push the system closer to the [programmer deception](https://arbital.com/p/10f) failure mode being more accessible if something else goes wrong. - Sufficiently advanced psychological models might constitute [mindcrime](https://arbital.com/p/6v). - The human-genie system might end up in the [Valley of Dangerous Complacency](https://arbital.com/p/2s8) where the genie *almost* always gets it right but occasionally gets it very wrong, and the human user is no longer alert to this possibility during the [checking phase](https://arbital.com/p/2qq). - E.g. you might be tempted to skip the user checking phase, or just have the AI do whatever it thinks you meant, at a point where that trick only works 99% of the time and not 99.999999% of the time. - Computing sufficiently advanced DWIDKIM or [EV](https://arbital.com/p/3c5) possibilities for user querying might expose the human user to cognitive hazards. ("If you were sufficiently superhuman under scenario 32, you'd want yourself to stare really intently at this glowing spiral for 2 minutes, it might change your mind about some things... want to check and see if you think that's a valid argument?") - If the AGI was actually behaving like a safe genie, the sense of one's wishes being immediately fulfilled without effort or danger might expose the programmers to additional [moral hazard](https://arbital.com/p/2sb).
d5bdbb46-3944-4824-b294-aed2582e0565
trentmkelly/LessWrong-43k
LessWrong
Interesting discussion of concentration and productivity [link] http://www.johndcook.com/blog/2013/02/04/four-hours-of-concentration/ And since this is the Internet, and facts are involved, our gwern turns up there also.
3656467a-1792-4a45-b6d7-adf6d4b12fce
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
What is "transformative AI"? **Transformative AI** is "\[...\] AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution."[^hy8b4kflu8] The concept refers to the large effects of AI systems on our well-being, the global economy, state power, international security, etc. and not to specific capabilities that AI might have (unlike the related terms [Superintelligent AI](https://www.lesswrong.com/tag/superintelligence) and [Artificial General Intelligence](https://www.lesswrong.com/tag/artificial-general-intelligence)). Holden Karnofsky gives a more detailed definition in [another OpenPhil 2016 post](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/): > \[...\] Transformative AI is anything that fits one or more of the following descriptions (emphasis original): > > * AI systems capable of fulfilling all the necessary functions of human scientists, unaided by humans, in developing another technology (or set of technologies) that ultimately becomes widely credited with being the most significant driver of a transition comparable to (or more significant than) the agricultural or industrial revolution. Note that just because AI systems *could* accomplish such a thing unaided by humans doesn’t mean they *would*; it’s possible that human scientists would provide an important complement to such systems, and could make even faster progress working in tandem than such systems could achieve unaided. I emphasize the hypothetical possibility of AI systems conducting substantial unaided research to draw a clear distinction from the types of AI systems that exist today. I believe that AI systems capable of such broad contributions to the relevant research would likely dramatically accelerate it. > * AI systems capable of performing tasks that currently (in 2016) account for the majority of full-time jobs worldwide, and/or over 50% of total world wages, unaided and for costs in the same range as what it would cost to employ humans. Aside from the fact that this would likely be sufficient for a major economic transformation relative to today, I also think that an AI with such broad abilities would likely be able to far surpass human abilities in a subset of domains, making it likely to meet one or more of the other criteria laid out here. > * Surveillance, autonomous weapons, or other AI-centric technology that becomes sufficiently advanced to be the most significant driver of a transition comparable to (or more significant than) the agricultural or industrial revolution. (This contrasts with the first point because it refers to transformative technology that is itself AI-centric, whereas the first point refers to AI used to speed research on some other transformative technology.) [^hy8b4kflu8]: As defined by [Open Philanthropy's Holden Karnofsky in 2016](https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence-the-philanthropic-opportunity/), and reused by [the Center for the Governance of AI in 2018](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf)
7e469272-7399-4619-913a-0e4a65f5d0c7
trentmkelly/LessWrong-43k
LessWrong
How to Find a Problem There are many ingredients for creativity. One is to have a lot of knowledge saved up, so it can be combined in novel ways. One is to actually think about things for five minutes. One is to not flinch away from problems when you find them. But the most important is to have a problem to solve. Or if you're Feynman: > You have to keep a dozen of your favourite problems constantly present in your mind, although by and large they will lay in a dormant state. Importance To have a problem in the back of your mind constantly is somewhat taxing. You need to find a problem which isn't emotionally draining. For this reason the problems need to be important. They don't have to be the most important thing in the world, but they do have to be important enough to you to be somewhat self-motivating. If your day-to-day life involves so many interactions with doorbells that you can think about doorbell design all day, then that's enough. In some ways, I have it easy on this front, which makes it hard for me to give advice. My domain is biological chemistry, so I can do a good job just by pointing myself at the closest ageing-related issue. Ageing is very salient to me, and as I get older I will either succeed or it will become painfully more salient. Size The problems need to be of the correct size. I don't hold the problem "cure ageing" in my head all day. That wouldn't work, because anything could in theory be related to "cure ageing" but there's no clear way for my brain to make the right connection. Instead you have to find a correct balance. The problem "replace lost stem cells in human tissue" is about the right level. There are lots of possible answers, and the chain from ideas I come across to the problem is not too many steps long. I suspect that the right level is different for different people. This makes my advice somewhat unhelpful. Still, just knowing there is a correct level might be helpful. Perhaps a good idea is to try and keep a few different problems of dif
4fa428e9-c05a-4e0b-a47d-3ffe795c22ee
trentmkelly/LessWrong-43k
LessWrong
Results of LW Technical Background Survey See results here. The main goal of the survey was to provide info for authors about their target audience, so here's a high-level overview toward that end: * The average respondent is some kind of professional programmer, with an undergrad degree (or equivalent) in CS. * Most people have seen at least some economics and probability, but not at the level of an undergrad degree. * Almost everyone knows calculus, but linear algebra or differential equations will likely be lost on at least ~25% of respondents. * There are substantial zero-knowledge and high-knowledge counts for most areas. * About half of respondents had read the sequences in their entirety. Here are charts of the responses to each question. I strongly recommend looking at them directly rather than just taking my summary at face value. As always, remember this is an opt-in survey without any sort of verification of responses, so take everything with a grain of salt. One interesting note: we had a handful of respondents declaring very high skill levels (Nobel-level economists, Turing-level computer scientists, or primary developers of popular software). I'd personally be interested to hear what exactly those people work on, especially if they're willing to occasionally field questions on their area of expertise. All y'all should leave a comment or something. Actually, I'm curious what everyone works on, especially specialties for all the researchers. Feel free to leave a quick comment, especially if you're able and willing to occasionally field questions in your area of expertise.
2616c899-aaa8-4728-ad94-01424382e9d7
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"My deconversion from Christianity had a large positive impact on my life. I suspect it had a small positive impact on the world, too. (For example, I no longer condemn gays or waste time and money on a relationship with an imaginary friend.) And my deconversion did not happen because I came to understand the Bayesian concept of evidence or Kolmogorov complexity or Solomonoff induction. I deconverted because I encountered some very basic arguments for non-belief, for example those in Dan Barker's Losing Faith in Faith. Less Wrong has at least two goals. One goal is to raise the sanity waterline. If most people understood just the basics Occam's razor, what constitutes evidence and why, general trends of science, reductionism, and cognitive biases, the world would be greatly improved. Yudkowsky's upcoming books are aimed at this first goal of raising the sanity waterline. So are most of the sequences. So are learning-friendly posts like References & Resources for LessWrong. A second goal is to attract some of the best human brains on the planet and make progress on issues related to the Friendly AI problem, the problem with the greatest leverage in the universe. I have suggested that Less Wrong would make faster progress toward this goal if it worked more directly with the community of scholars already tackling the exact same problems. I don't personally work toward this goal because I'm not mathematically sophisticated enough to do so, but I'm glad others are! Still, I think the first goal could be more explicitly pursued. There are many people like myself and jwhendy who can be massively impacted for the better not by coming to a realization about algorithmic learning theory, but by coming to understand the basics of rationality like probability and the proper role of belief and reductionism. Reasons for Less Wrong to devote more energy to the basics Such efforts to spread the basics will have a short-term impact on more people than will efforts toward Friendly AI, and these impacted people will in turn impact others, hopefully for the better. Some LWers may feel they have little to contribute because they aren't masters of Solomonoff induction or algorithmic learning theory. But they will be able to contribute to raising the sanity waterline by spreading the basics of rationality. Providing more basic resources will attract a wider base of readers to Less Wrong, leading to (1) more new rationalists and (2) more donations to SIAI, for solving the Friendly AI problem. Even for experienced rationalists, it can be easy to forget the basics at times. Humans are not naturally rational, and revert to pre-rationality rather quickly without ongoing training and practice. How to do it Let me put some meat on this. What does more focus on the basics look like? Here are some ideas: The sequences are great, but some people are too busy or lazy to read even those. Some of the sequences could be summarized into single posts crafted so as to have no prerequisites. These posts could be linked widely, and entered in relevant blog carnivals. There is another huge community who will watch a 10-minute video, but will not read a short post. So the YouTube lectures are a great idea. But they could be improved. As of today, three of the videos show a presenter against a whiteboard. To make this work well requires lots of resources: (1) a good camera, (2) a shotgun or lavavlier microphone, (3) a teleprompter, and (4) an experienced and enthusiastic presenter. That's hard to do! 
But videos in the familiar PowerPoint style or the Khan Academy style are easier to do well. All it requires is some free presentation software, free screen capture software and a $70 high-quality USB mic like the Blue Snowball. This approach would also allow more people to participate in making videos on the basics of rationality. Sometimes, a basic concept of rationality will only "click" with somebody if presented in a certain way. Some will need a story that illustrates the concept. There are some of these on Less Wrong already, or in something like Harry Potter and the Methods of Rationality, but there could be more. For others, perhaps a Cartoon Guide to Bayes' Theorem or a Cartoon Guide to Reductionism would make it "click." Those who are more ambitious might attempt to create an animation explaining some core rationalist concept, a la this visualization of special relativity. Write "Introduction to X" or "How to Use X" posts. Keep developing the wiki, obviously. Develop a rationality workbook. Finally, to avoid the "somebody really ought to do something about this" syndrome, let me tell you what I am going to do on this front. I will be helping to raise the sanity waterline by first raising the scholarship waterline. More specifically, I will be summarizing some of my favorite university textbooks into 50-page blog posts. This will provide people with free, substantive, and easy-to-use tutorials on important subjects. I published the first such tutorial today, posted to my own site since I don't know yet whether 50-page posts are wanted on Less Wrong: Cognitive Science in One Lesson." The above is a chunk of text from a post created by a user on the website LessWrong. Explain this as if I never finished high school? The author of this post used to be a Christian but decided to leave the religion because of some basic arguments that made him question his faith. He mentions a website called LessWrong, which has two main goals. The first goal is to help people think more logically and rationally. The author believes that if more people understood basic principles like Occam's razor (the simplest explanation is usually the correct one), what counts as evidence, the general trends of science, reductionism (breaking complex things down into simpler parts), and cognitive biases (ways our brain can trick us into thinking illogically), the world would be a better place. The second goal of LessWrong is to gather smart people to work on a problem called Friendly AI. This is about making sure that if we ever create super intelligent machines, they'll be friendly to humans and not harm us. The author admits he's not smart enough to help with this, but he's glad others are working on it. The author then argues that LessWrong should focus more on teaching these basic principles of rational thinking to more people. He thinks this would help more people in the short term and could also attract more people to the site, which could lead to more donations for their work on Friendly AI. He suggests several ways to do this, like summarizing complex ideas into simpler posts, making short videos, using stories or cartoons to explain concepts, writing introductory posts on different topics, improving the website's wiki, and creating a workbook on rational thinking. Finally, the author says he's going to contribute by summarizing his favorite university textbooks into shorter blog posts. He thinks this will help people learn important subjects more easily. 
He's already started doing this and has posted his first tutorial on his own website. Can you elaborate on how the author's deconversion from Christianity influenced his perspective on rational thinking and his involvement with LessWrong? The author's deconversion from Christianity seems to have been a significant turning point in his life, which led him to reevaluate his beliefs and how he forms them. He mentions that his departure from the faith was not due to complex theories or concepts, but rather basic arguments that challenged his religious beliefs. This experience likely sparked an interest in rational thinking and logical reasoning, as these are the tools he used to question and ultimately leave his faith. His involvement with LessWrong, a community dedicated to refining the art of human rationality, appears to be a natural extension of this personal journey. The author sees value in the goals of LessWrong, both in raising the general level of rational thinking (the "sanity waterline") and in addressing complex problems like Friendly AI. The author's deconversion also seems to have influenced his perspective on the potential impact of rational thinking. He mentions that his departure from Christianity had a positive impact on his life and possibly on the world, as he no longer holds certain negative views or spends resources on religious practices he now sees as unproductive. This suggests that he sees rational thinking not just as a personal tool, but as a means to improve society as a whole. Moreover, his experience of deconversion through basic arguments highlights for him the importance of making rational thinking accessible to everyone, not just those who can understand complex theories. This is reflected in his call for LessWrong to focus more on teaching the basics of rationality, and his own commitment to summarizing university textbooks into shorter, more digestible blog posts. Considering the author's personal journey from faith to rational thinking and his subsequent involvement with LessWrong, how might his approach to promoting rationality differ if he had deconverted from a different religion or philosophical belief system? The author's approach to promoting rationality is deeply influenced by his personal journey from Christianity to atheism. His experience with Christianity, a religion that is heavily based on faith and personal belief, likely made him appreciate the value of rational, evidence-based thinking. However, if he had deconverted from a different religion or philosophical belief system, his approach might have varied. For instance, if he had deconverted from a religion or philosophy that places a high value on questioning, debate, and logical reasoning (like Buddhism or certain schools of Hindu philosophy), he might already have been familiar with some aspects of rational thinking. In this case, his focus might have been more on refining these skills and applying them in new contexts, rather than on introducing them from scratch. If he had deconverted from a religion that places a high emphasis on community and communal decision-making (like certain forms of Judaism or indigenous belief systems), he might have been more focused on promoting rational thinking within communities and groups, rather than just at the individual level. 
Alternatively, if he had deconverted from a philosophy that emphasizes skepticism and questioning (like existentialism or postmodernism), he might have been more focused on the limits of rational thinking and the importance of acknowledging uncertainty and ambiguity. In any case, the author's personal journey would likely have shaped his understanding of rational thinking and how best to promote it. His approach would likely reflect the particular strengths and weaknesses of the belief system he left, as well as the lessons he learned during his deconversion process.
88fe198e-ed88-45a0-913a-bbcdb405c9c5
trentmkelly/LessWrong-43k
LessWrong
Meetup : Madison Discussion article for the meetup : Madison WHEN: 02 August 2011 06:00:00PM (-0500) WHERE: 1225 Regent St. Madison, WI With the intent of being the first of many regular meetups, let's get together on Tuesday. I plan to conduct a few exercises on brainstorming, including the "alternatives procedure" that Anna Salamon taught at the rationality minicamp. Mostly, though: discussion, meeting people, and assigning names, faces and personalities to usernames. If you'd like to come but this time doesn't work for you, say something. I plan on doing this at least once every two weeks, so we should try to settle on a time that accommodates everyone in the long run. Discussion article for the meetup : Madison
f9b99de0-053b-4a4f-859b-a458a84d04a3
trentmkelly/LessWrong-43k
LessWrong
Disentangling inner alignment failures TL;DR: This is an attempt to disentangle some concepts that I used to conflate too much as just "inner alignment". This will be old news to some, but might be helpful for people who feel confused about how deception, distributional shift, and "sharp left turns" are related. I first discuss them as entirely separate threat models, and then talk about how they're all aspects of "capabilities are more robust than alignment". ---------------------------------------- Here are three different threat models for how an AI system could very suddenly do catastrophic things: 1. Deception: The AI becomes deceptively aligned at some point during training, and then does what we want only for instrumental reasons (because it wants to be deployed). Once we deploy, it starts pursuing its actual objective, which is catastrophic for humans. 2. Distributional shift: The AI behaves well during training, perhaps using some messy set of heuristics and proxy objectives. We deploy, and there's distributional shift in the AI's inputs, which leads to the model's proxies no longer being aligned with human values. But it's still behaving capably, so we again get catastrophic outcomes. 3. Capability gains/sharp left turn: At some point (while training or in deployment), the AI becomes much more capable, including at a bunch of things we didn't explicitly train for. This could happen quite suddenly, e.g. because it learns some crucial general skill in relatively few gradient steps, or because it starts learning from something other than gradients that's way faster. The properties of the AI that previously ensured alignment are too brittle and break during this transition. Note that these can be formulated as entirely distinct scenarios. For example, deception doesn't require a distributional shift[1] nor capability gains; instead, the sudden change in model behavior occurs because the AI was "let out of the box" during deployment. Conversely, in the distributional shift scenario, the model
39a98cca-5809-44ec-ba2a-8d7347eea691
trentmkelly/LessWrong-43k
LessWrong
Towards learning incomplete models using inner prediction markets This post is a short informal summary of a new idea I'm starting to work on, which combines my thinking about "incomplete models" with Scott's logical inductors. Motivation Before, I speculated that generalizing Bayesian inference to include incomplete models will allow solving the grain of truth problem in a way more satisfactory than what was achieved so far. More generally, this would allow getting performance guarantees for agents in environments that are as complex as or more complex than the agent. Now, such performance guarantees are already known for learning algorithms suited to the non-realizable settings. However, as Jessica noted here, those methods don't address long-term planning due to the scarcity of training data. On the other hand, Bayesian methods do allow long-term planning: if the environment is realizable (i.e. absolutely continuous w.r.t. the prior), on-policy merging of opinions will occur at a rate that doesn't depend on the utility function. This means that for a fixed environment and sufficiently slowly falling time discount, the agent will be able to form effective long-term plans, at least as long as on-policy forecasting is sufficient. Of course, realistic settings require off-policy forecasting, which requires some exploration. If we want global optimality in policy space, we would have to explore for an entire horizon which means long-term planning fails again. However, I think that satisfactory weaker optimality guarantees can be achieved by more conservative exploration, especially when "consulting a (human) expert" is an available form of "exploration". This advantage of Bayesian agents is only applicable in the realizable case, which is an unrealistic assumption. However, the inclusion of incomplete models would bring the advantage into the non-realizable case: the environment might be arbitrarily complex, but as long as it conforms to some simple incomplete model, this model can be learnt quickly and exploited for long-term planni
28484141-d88c-413d-9254-7259591ba63a
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Anti-'FOOM' (stop trying to make your cute pet name the thing) *Notes/basis:* This is kind of a short-form post in style but I think it's important enough to put here. Obviously let me know if someone else said this better Summary ------- Formal overly-intellectual academese is bad. But using your 'cute' inside joke name for things is potentially worse. It makes people cringe, sounds like you are trying to take ownership of something, and excludes people. Use a name that is approachable but serious. The problem. ------------ Where did the term 'FOOM' come from, to refer to AGI risk? I asked GPT4: > > [!ai] AI > The term 'foom' was coined by artificial intelligence researcher and author Eliezer Yudkowsky in his 2008 book titled "The Sequences". Yudkowsky used the term to refer to a hypothetical scenario where an artificial general intelligence (AGI) rapidly and exponentially improves its own intelligence, leading to an uncontrollable and potentially catastrophic outcome for humanity. The term 'foom' is a play on the word 'boom', representing the sudden and explosive nature of AGI development in this scenario. > > > Another example: 'AI-not-kill-everyone-ism' Analogies to fairly successful movements: ----------------------------------------- * Global warming was not called "Roast", and the movement was not called "anti-everyone-burns-up-ism" * Nuclear holocaust was not called "mega-boom" * Anti-slavery was not called ... (OK I won't touch this one) How well has the use of cute names worked in the past? ------------------------------------------------------ I can't think of any examples where they have caught on in a positive way. The closest I can think of are ### "Nudge" (by Richard Thaler?) ... to describe choice-architecture interventions; - My impression is that the term 'nudge' got people to remember it but made it rather easy to dismiss - others in that space have come up with names that caught on less well, I think (like "sludge"), which also induce a bit of cringe ### "Woke" I think this example basically speaks for itself. ### Tea-Party movement This goes in the opposite direction perhaps (fairly successful), but I still think it's not quite as cringeworthy as FOOM. The term 'tea party' obviously has a long history in our culture, especially the "Boston Tea Party". What else? ---------- I asked GPT4 > > when have social movements used cute 'inside joke' names to refer to the threats faced? > > > The suggestions are not half as cute or in-jokey as FOOM: Net Neutrality, The Umbrella Movement, Extinction Rebellion (XR), Occupy Wall Street (OWS) I asked it to get cuter... [[1]](#fn-n82EcfTymwvxH9RYQ-1) Prodding it further... Climategate, Frankenfoods, Slacktivism ... also not so inside-jokey nor as cringeworthy IMO. Prodding it for more cutesy more inside-jokey yields a few terms that barely caught on, or didn't characterize the movement or the threat as a whole.[[2]](#fn-n82EcfTymwvxH9RYQ-2) --- 1. the best it came up with was "Treehuggers: This term, originally used pejoratively to describe environmental activists, has been reclaimed by the environmental movement as a playful and positive label for those who care about protecting nature and the planet." But this goes in the opposite direction: reclaiming a term, not pushing it forward. It also suggested Ploggers, Plarn, Solartopia, and Ecotarians. All either not as cringey, or not something I've ever heard of. [↩︎](#fnref-n82EcfTymwvxH9RYQ-1) 2. 
"Nurdles: In environmental activism, "nurdles" are small plastic pellets used as raw material in the production of plastic products." OK but I've never heard of these, and they are not the focus of the movement. " "Vaxxies": This playful term is a blend of "vaccine" and "selfie" ... way to celebrate and promote vaccination efforts, turning the act of getting vaccinated into a social media moment. " [↩︎](#fnref-n82EcfTymwvxH9RYQ-2)
f0555357-9727-4b25-932b-7cd8ea473915
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Response to “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers” ![Watercolour image of a frontier AI lab, bathed in warm sunlight. People are working together to discuss pausing of frontier AI research](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/JCGAdrrr3ePXHEzqc/auupxtjskadvdg6tee2p) Generated by DALL-E 3. *Note: this is an x-post from my blog, [Thoughts on AI](https://www.matthewwearden.co.uk/), where I discuss a variety of topics in AI governance, particularly corporate governance.* **Introduction** ================ The corporate governance team at the Centre for the Governance of AI recently published a great paper, [“Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers”](https://www.governance.ai/research-paper/coordinated-pausing-evaluation-based-scheme), authored by Jide Alaga and Jonas Schuett. The following post contains a set of responses and comments to the paper - all such responses are based on personal insights and opinions that I have on the content, which I hope add to the conversation. Any negative tone that may come across in the post does not represent my feelings on the paper overall - I think it is a fantastic, practical piece of research that I hope is read by policymakers both in frontier AI labs and governments. **A Note on Good Intentions** ----------------------------- It may appear that some of my comments take a rather pessimistic view of frontier AI labs and their interests. In truth, I believe that many of these labs are full of individuals genuinely trying to do the right thing, who are aware of the risks they are dealing with. In my mind, this good faith should be given to almost any individual working at a frontier lab, but it absolutely should not be extended to the labs themselves. Any such organisation exists in an environment that strongly rewards certain types of decision making, and a collection of entirely justifiable, well-meant decisions can still lead to very bad outcomes. Good governance should not rely on the good intentions of organisations, but instead seek to make the exercising of those good intentions as likely as possible to align with good outcomes, whilst making any bad intentions as painful and cumbersome as possible to execute on.   **Limitations of the Mutual Auditor Model** =========================================== The main area where my current opinions disagree with those of the authors is on the efficacy and feasibility of the Mutual Auditor model in the paper. There are two key disagreements presented below. **It is unlikely that there will be a single auditor for the industry** ----------------------------------------------------------------------- Many of the strengths of the mutual auditor model lie in the coordination that is possible due to all frontier AI labs using a single auditor. This is a scenario that I believe is very unlikely to exist in practice, primarily because auditing is a profitable industry, with both demand and space for multiple organisations to enter the market.   Unless there are legal requirements to use one auditor, a frontier lab will seek to find any organisation that can a) evaluate their systems sufficiently to demonstrate they have found a reasonable auditor, and b) be loose enough with their audits that the lab’s own opinions on the safety of their models can heavily influence the outcome of the audit. 
This incentive mechanism has been shown through many other industries to be enough to attract new market entrants, and there is no compelling reason I can find to believe this wouldn’t be true of frontier AI research. Amongst others, the Big Four are known to build teams of experts ready to enter new auditing markets in a variety of technical fields.   Given that there is no longer one single auditor, the coordinated pausing model breaks down. Agreements on the terms of coordinated pausing would need to be established between auditing firms, and there is no reason to assume that these would be sufficiently cautious to prevent the severe risk scenarios that the paper is intending to address. In such a world, a new race to the bottom may well begin between the auditors as they seek to attract firms away from their competitors.   There are two things that I can currently imagine would change my mind about this: * If I were to see examples of other industries that were largely audited by one or two firms, I would be much more optimistic that the single auditor model is feasible * If there were to be a set of practical and sound policies that could be implemented between multiple auditing firms, I would be much more convinced that the mutual auditor model could still work with multiple auditors in the market. **Auditors implementing pauses face too many legal risks** ---------------------------------------------------------- Any auditor that is asking frontier labs to pause model deployments, or even research, will face significant legal challenges from their clients if they attempt to enforce such policies. Organisations that attempt to restrict the competitiveness of private firms without very strong grounds for doing so may be held liable for the loss of profit they cause. Any pause will be met with strong challenges for why it was started, as well as challenges against the conditions for its ending. This can be seen often in the finance industry, with lengthy and expensive legal battles ensuing. This disincentivises an auditor from implementing such pauses, decreasing their efficacy significantly.   There are significant legal questions to be answered here, and I am not qualified to give opinions on them. I would be enthusiastic to see research demonstrating that this issue is less important than I currently believe it to be.   **Limitations of the Voluntary Pausing and Pausing Agreement Models** ===================================================================== I would first like to state that I agree with the authors of the paper that both the Voluntary Pausing and Pausing Agreement models are valuable intermediate steps to longer term solutions. However, there are a couple of limitations of the models that I don’t believe were addressed in the original paper and that I would like to mention here.  **Private Deployments** ----------------------- One issue with both of these models is that they do not sufficiently address the risks posed by the private deployments of dangerous models to individual clients of the frontier labs. As such deals are likely to be considered confidential, proprietary information, the enforcers of pauses in either model (the public and other AI labs) are unlikely to be aware of such deployments. Though I do not have financial information to back the following claim up, I think that such private deployments are likely to constitute major proportions of the revenue of frontier labs. As such, the incentives to make risky decisions are higher in such deals.   
These risks would be less salient to me if regulations were introduced forcing the disclosure of such private deployments to the public, or if data showed that private deployments constitute a much smaller proportion of revenue than I have imagined.

**Contractual Penalties**
-------------------------

Similar to the point raised above about auditors enforcing penalties, and to the antitrust issues described in the paper, I have strong concerns about the efficacy of contractual penalties as described in the Pausing Agreements model. My main concern is that there are very few business incentives for AI firms to maintain and uphold such contractual agreements with their competitors. The point I made at the start of this post about good intentions is relevant here: as much as everyone at these companies *wants* to do the right thing, this is too often overridden by the nature of the environment they operate in. Organisations that are at all driven by the incentives of capital markets cannot be relied on to self-regulate when the dangers are so great. Most likely, disagreements over the terms of any such contractual penalties will inevitably arise, and the penalties will be quietly dropped to avoid reproach from authorities enforcing antitrust laws.

I am more optimistic about a model where enforcement comes through suspending or revoking memberships of industry bodies, where the enforcer is a related but separate entity from any of its individual members. Work to establish and deepen the relationships between a frontier AI industry body and its members would be valuable.

**Observations on Key Research Areas**
======================================

I am strongly encouraged by the work in this paper that there are feasible options for implementing a coordinated pausing policy in the future. From the paper, I see a few key research areas that require prioritisation before any such policy could be implemented, listed below.

**Model Evaluations**
---------------------

It hardly needs pointing out, but the development of effective model evaluation methodologies is a fundamental requirement for any pausing policy. For this and many other reasons, evaluations research and threshold definitions are a must for the industry.

**Model Similarity and Research Relevance**
-------------------------------------------

For any pauses to be implemented, measures of model similarity must be created. Without them, it will be impossible to define what work at labs needs to be paused. Besides model evaluation research, this is probably the single largest bottleneck to any such policy being implemented.

**Legal Research**
------------------

Any enforcement of a pause is likely to be met with legal challenge, even challenge targeted at regulators. Research into relevant case studies from other industries, as well as into the development of strongly binding contracts, will be extremely valuable going forward.

**Incident Reporting Schemes**
------------------------------

For coordinated pausing strategies to work successfully, risk incidents must be correctly identified and reported to the relevant organisations. Practical incident reporting, whistleblowing and safe harbour schemes should be developed as a priority to enable this.

**Model Registers and Disclosure Requirements**
-----------------------------------------------

One key requirement for the success of a pausing policy is the development of model registers.
These registers should categorise models by their capabilities, architecture and deployment, and would ideally be coordinated by regulators that can enforce disclosure requirements, especially at the pre-training and pre-deployment stages. Specific, practical policy proposals for disclosure and notification schemes should be considered a high priority, as should work to build the infrastructure for a register of models and their capabilities (a sketch of what one entry might look like follows at the end of this post).

**Open-Sourcing Regulation**
----------------------------

Once models are open-sourced, work done to restrict their usage becomes almost entirely useless. Further research into policy proposals to prevent the open-sourcing of frontier models will be important for ensuring that the regulation of proprietary models remains relevant.

**Corporate Governance**
------------------------

For pauses to be implemented effectively within organisations, strong corporate governance structures need to be developed. Without them, it is possible that research may be conducted despite the formal position of the company, potentially still leading to dangerous outcomes.
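To make the model-register idea a few sections above concrete, here is a minimal sketch of what a single entry in such a register might record, assuming a regulator-run database. All names and fields are my own illustrative assumptions, not proposals from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one entry in a regulator-run model register.
# Every field below is illustrative, not drawn from the paper.
@dataclass
class ModelRegisterEntry:
    developer: str                 # legal entity responsible for the model
    model_name: str
    lifecycle_stage: str           # e.g. "pre-training", "pre-deployment", "deployed"
    architecture_summary: str      # model family, parameter count, training compute
    capability_evaluations: dict = field(default_factory=dict)  # eval name -> result
    deployment_contexts: list = field(default_factory=list)     # incl. private deployments
    last_disclosure: str = ""      # date the developer last updated this entry
```

Even a schema this simple makes the disclosure questions precise: who is obliged to populate each field, at which lifecycle stage, and who may read the private-deployment entries.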
320eb546-9b83-4eb2-91e2-d0f3e31e26eb
trentmkelly/LessWrong-43k
LessWrong
Somerville Porchfest Thoughts This Saturday was Porchfest in Somerville, an annual festival where musicians around the city play on their porches and people wander around listening. As in the past few years, Cecilia and I (Kingfisher) played for contra dancing: Harris Lapiroff called: If anyone has pictures or videos from the set, I'd love to see them as well. Overall, we had a great time! It was fun seeing so many friends, and the kids did some very good business with their bake sale. This was the second year we'd gotten permission to close the street to traffic, which we really needed. I applied on Monday 2024-03-18 and it was approved on Wednesday 2024-05-08 for an event on Saturday 2024-05-11. This was cutting it a bit close; among other things the permit says you need to give all abutters at least one week's written notice, which wasn't possible. My guess is the city wanted to decide all together which streets would be closed, and wanted to do that close to the time so they'd have as much information as possible? There were several times that cars came through the closed street, mostly in groups. They moved appropriately slowly, but they're just very bulky, which was tricky with the dancing. There were barriers up, but maybe with how many streets were closed the drivers felt like they had no other possible routes? Even though Morrison tends to have a lot of bands and people, this year it was far from the most crowded place. That would have been Aberdeen, where Guster played. Over on Reddit /u/mais318 took a drone up (possibly unsafely/illegally?): I did hear a lot of mixed things afterwards about crowd and traffic management. Some ideas for future iterations: * Close more streets. If bands that tend to draw a large crowd are on a street, put in a block party request. * Pick some main streets to keep open [1] and ask people not to perform on those streets. Strongly discourage driving on non-main streets, even the ones that aren't closed. * Decide farther in advance which
febe03b1-8402-42b8-bbac-f546f9d40943
trentmkelly/LessWrong-43k
LessWrong
Three of my beliefs about upcoming AGI [Epistemic Status: Semi-informed conjecture. Feedback is welcome.] At EAG last month, I got the advice to develop my own mental model of what general/transformative/superintelligent AI will look like when it gets here, and where I think the risks lie. This week I'll be writing a series of shorter posts thinking out loud on this. I want to start by stating three of my intuitions about what artificial general intelligence (AGI) will look like and how safe it will be, which I'll phrase as conjectures: 1. Conjecture A: Large Language Models (LLMs) will be central to (our first) Artificial General Intelligences (AGIs) 2. Conjecture B: LLM capabilities and safety are heavily dependent on their prompt and the way they are invoked (in addition to the model). 1. By "the way they are invoked" I mean the call-execute loop external to the LLM itself, including whether its outputs can be interpreted and run by an external tool. Examples of different ways they could be invoked include "GPT-4 via ChatGPT", "GPT-4 via ChatGPT plus plugins", and "GPT-4 in ARC's testing environment where it can 'execute code, do chain-of-thought reasoning, and delegate to copies of itself'". 3. Conjecture C: LLMs don't behave like expected utility maximizers for tasks they are given. 1. Caveats/clarifications: LLMs are obviously choosing next tokens based on minimizing a function (for pre-trained models the function is an approximation of "predictive loss on the next token"), but this doesn't map cleanly onto a utility calculation for a task it is given (e.g. "make paperclips"). That said, LLMs might be capable of role-playing an expected utility maximizer. I think I'm reasonably confident in these conjectures (90+% belief under appropriate formalization). I see conjecture A as very different from conjectures B and C, and I think B and C have important safety implications that I'll discuss in a later post. But I am a bit concerned that the properties I am ascribing to LLMs are mostly true of
372b44d9-141f-49b2-a131-7bb6604a1b8b
trentmkelly/LessWrong-43k
LessWrong
Am I obligated to reread the Book of Mormon? In this comment thread, I stated that > I have read the Book of Mormon in the past, but I hereby precommit to reading it again and "searching in my heart" (I have a copy on my bookshelf) if you can demonstrate that my skepticism regarding your evidence is unwarranted. In the resulting thread five evidences were given, and some back-and-forth occurred. Being myself somewhat biased, I feel unfit to judge if Arandur actually showed that a non-Mormon's skepticism is unwarranted. So you, who wish to become stronger, I ask: please comment below whether or not you believe the proposition was satisfied. Remember! This is not a vote on whether the evidence is factually correct or not! Remember! This is not a chance to anonymously signal your agreement or disagreement with the LW hive mind! Remember! If the sky is green, wish to believe that the sky is green! I don't know what else I can say to forestall thread hijacking.
c7cee79e-a2c5-40d1-b8d8-42634a0eba19
trentmkelly/LessWrong-43k
LessWrong
Your intuitions are not magic People who know a little bit of statistics - enough to use statistical techniques, not enough to understand why or how they work - often end up horribly misusing them. Statistical tests are complicated mathematical techniques, and to work, they tend to make numerous assumptions. The problem is that if those assumptions are not valid, most statistical tests do not cleanly fail and produce obviously false results. Neither do they require you to carry out impossible mathematical operations, like dividing by zero. Instead, they simply produce results that do not tell you what you think they tell you. As a formal system, pure math exists only inside our heads. We can try to apply it to the real world, but if we are misapplying it, nothing in the system itself will tell us that we're making a mistake. Examples of misapplied statistics have been discussed here before. Cyan discussed a "test" that could only produce one outcome. PhilGoetz critiqued a statistical method which implicitly assumed that taking a healthy dose of vitamins had a comparable effect to taking a toxic dose. Even a very simple statistical technique, like taking the correlation between two variables, might be misleading if you forget about the assumptions it's making. When someone says "correlation", they are most commonly talking about Pearson's correlation coefficient, which seeks to gauge whether there's a linear relationship between two variables. In other words, if X increases, does Y also tend to increase (or decrease)? However, as with vitamin dosages and their effects on health, two variables might have a non-linear relationship. Increasing X might increase Y up to a certain point, after which increasing X would decrease Y. Simply calculating Pearson's correlation on two such variables might cause someone to get a low correlation, and therefore conclude that there's no relationship, or only a weak one, between the two. (See also Anscombe's quartet.) The lesson here, then, is to know the assumptions behind any tool you use: the tool itself will not warn you when those assumptions fail.
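To make the correlation pitfall concrete, here is a minimal sketch (my own, not from the post) in which a perfectly deterministic non-linear relationship yields a Pearson correlation of essentially zero:

```python
import numpy as np

# A perfect, deterministic, but non-linear relationship:
# y rises as x approaches 0 and falls again afterwards.
x = np.linspace(-1, 1, 201)
y = -x**2

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")  # ~0, i.e. "no linear relationship detected"
```

The test does not fail loudly; it answers a different question (is the relationship *linear*?) than the one the naive user thought they were asking.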
04f8e6a0-7c60-4c5e-b4c4-432aff415cd2
trentmkelly/LessWrong-43k
LessWrong
[Link] Tetlock on the power of precise predictions to counter political polarization The prediction expert Philip Tetlock writes in New York Times on the power of precise predictions to counter political polarization. Note the similarity to Robin Hanson's futarchy idea. > IS there a solution to this country’s polarized politics? > > Consider the debate over the nuclear deal with Iran, which was one of the nastiest foreign policy fights in recent memory. There was apocalyptic rhetoric, multimillion-dollar lobbying on both sides and a near-party-line Senate vote. But in another respect, the dispute was hardly unique: Like all policy debates, it was, at its core, a contest between competing predictions. > > Opponents of the deal predicted that the agreement would not prevent Iran from getting the bomb, would put Israel at greater risk and would further destabilize the region. The deal’s supporters forecast that it would stop (or at least delay) Iran from fielding a nuclear weapon, would increase security for the United States and Israel and would underscore American leadership. > > The problem with such predictions is that it is difficult to square them with objective reality. Why? Because few of them are specific enough to be testable. Key terms are left vague and undefined. (What exactly does “underscore leadership” mean?) Hedge words like “might” or “could” are deployed freely. And forecasts frequently fail to include precise dates or time frames. Even the most emphatic declarations — like former Vice President Dick Cheney’s prediction that the deal “will lead to a nuclear-armed Iran” — can be too open-ended to disconfirm. > > // > > Non-falsifiable predictions thus undermine the quality of our discourse. They also impede our ability to improve policy, for if we can never judge whether a prediction is good or bad, we can never discern which ways of thinking about a problem are best. > > The solution is straightforward: Replace vague forecasts with testable predictions. Will the International Atomic Energy Agency report in December that Iran h
141df73a-a7c4-47b4-b9ea-62580c09b899
trentmkelly/LessWrong-43k
LessWrong
Alan Kay - Programming and Scaling (Republishing the comment linked by Said Achmiz below.) What Is This Talk About This talk is about several things. Software ‘Engineering’ - For the amount of functionality we get out of systems, we seem to have to write a lot of code. Alan Kay tries to give the viewer a sense of the vast scale of how much code we’re writing, and then asks the open question of whether that’s how much code we really need to write to get that much functionality. There’s good evidence it isn’t, including both research Kay has done at the Viewpoints Research Institute, along with a biological metaphor showing how the machines we build don’t have the necessary capability to let us easily modify them without understanding the details of their every inner component. Epistemology - This is the talk where Kay gives his famous ‘perspective is worth 80 IQ points’ quip. Kay discusses how the sheer genius of someone like Leonardo merely allowed him to dream about vehicles he couldn’t build without a working engine. He then goes into a short analysis of how many new useful resources exist in software when it comes to outlook, versus the scant progress that has been made in utilizing them. He compares the situation with computer science to that of music, where only a tiny fraction of a fraction of all possible musical theory is actually used to produce popular music. Complexity versus Complication - Kay theorizes that there is a complexity curve and a complication curve, and that the curve for complication (i.e, inefficient modeling) grows much faster than the curve for complexity. To the point where the right abstraction is the difference between humans being able and unable to manage a system. Tinkering versus Mathematics versus Science versus Engineering - Kay provides a taxonomy of human artifact production, which distinguishes between several categories of endeavor that are not usually well defined. For example, he points out that mathematics had to be invented, and before it people used
6fddea43-b1b5-494d-9653-207962433149
trentmkelly/LessWrong-43k
LessWrong
On motivations for MIRI's highly reliable agent design research (this post came out of a conversation between me and Owen Cotton-Barratt, plus a follow-up conversation with Nate) I want to clarify my understanding of some of the motivations of MIRI's highly reliable agent design (HRAD) research (e.g. logical uncertainty, decision theory, multi level models). Top-level vs. subsystem reasoning I'll distinguish between an AI system's top-level reasoning and subsystem reasoning. Top-level reasoning is the reasoning the system is doing in a way its designers understand (e.g. using well-understood algorithms); subsystem reasoning is reasoning produced by the top-level reasoning that its designers (by default) don't understand at an algorithmic level. Here are a few examples: AlphaGo Top-level reasoning: MCTS, self-play, gradient descent, ... Subsystem reasoning: whatever reasoning the policy network is doing, which might involve some sort of "prediction of consequences of moves" Deep Q learning Top-level reasoning: the Q-learning algorithm, gradient descent, random exploration, ... Subsystem reasoning: whatever reasoning the Q network is doing, which might involve some sort of "prediction of future score" Solomonoff induction Top-level reasoning: selecting (Cartesian) hypotheses by seeing which make the best predictions Subsystem reasoning: the reasoning of the consequentialist reasoners who come to dominate Solomonoff induction, who will use something like naturalized induction and updateless decision theory Genetic selection It is possible to imagine a system that learns to play video games by finding (encodings of) policies that get high scores on training games, and combining encodings of policies that do well to produce new policies. Top-level reasoning: genetic selection Subsystem reasoning: whatever reasoning the policies are doing (which is something like "predicting the consequences of different actions") In the Solomonoff induction case and this case, if the algorithm is run with enough computation, the sub
01e9bd98-bc58-464c-9d40-ce066a1d6fd4
trentmkelly/LessWrong-43k
LessWrong
Objections to Coherent Extrapolated Volition > In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. — Eliezer Yudkowsky, May 2004, Coherent Extrapolated Volition Foragers versus industry era folks Consider the difference between a hunter-gatherer, who cares about his hunting success and to become the new tribal chief, and a modern computer scientist who wants to determine if a “sufficiently large randomized Conway board could turn out to converge to a barren ‘all off’ state.” The utility of the success in hunting down animals and proving abstract conjectures about cellular automata is largely determined by factors such as your education, culture and environmental circumstances. The same forager who cared to kill a lot of animals, to get the best ladies in its clan, might have under different circumstances turned out to be a vegetarian mathematician solely caring about his understanding of the nature of reality. Both sets of values are to some extent mutually exclusive or at least disjoint. Yet both sets of values are what the person wants, given the circumstances. Change the circumstances dramatically and you change the persons values. What do you really want? You might conclude that what the hunter-gatherer really wants is to solve abstract mathematical problems, he just doesn’t know it. But there is no set of values that a person “really” wants. Humans are largely defined by the circumstances they reside in. * If you already knew a movie, you wouldn’t watch it. * To be able to get your meat from the supermarket changes the value of hunting. If “we knew more, thought faster, were more the people we wished we were, and had grown up closer together” then we would stop to desire what we learnt, wish to thi
0aa6da47-238d-4966-b1f3-c52df7f958ad
trentmkelly/LessWrong-43k
LessWrong
two puzzles on rationality of defeat I present here two puzzles of rationality that you LessWrongers may think are worth dealing with. Maybe the first one looks more amenable to a simple solution, while the second has attracted the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look that simple when it comes to a solution. So, let's go to the puzzles!

Puzzle 1

At t1 I justifiably believe theorem T is true, on the basis of a complex argument I have just validly reasoned through from the justified premises P1, P2 and P3. So, at t1 I reason from the premises:

(R1) P1, P2, P3

to the conclusion:

(T) T is true

At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are just a part, tells me I'm wrong. She tells me the theorem is false, and convinces me of that on the basis of valid reasoning with at least one false premise, the falsity of that premise being unknown to us. So, at t2 I reason from the premises (Reliable Math and Testimony of Math):

(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),

(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of valid reasoning from F, P1, P2 and P3,

(R2) F, P1, P2 and P3

to the justified conclusion:

(~T) T is not true

Some epistemologists would say that (~T) defeats my previous belief (T). Is it rational for me to proceed this way? Am I taking the correct direction of defeat? Wouldn't it also be rational for (~T) to be defeated by (T)? Why does (~T) defeat (T), and not vice versa? Is it just because (~T)'s justification was obtained at a later time?

Puzzle 2

At t1 I know theorem T is true, on the basis of a complex argument I have just validly reasoned through from the known premises P1, P2 and P3. So, at t1 I reason from the known premises:

(R1) P1, P2, P3

to the known conclusion:

(T) T is true

Besides, I also reason from the known premises:

(ME) If there is any evidence against something that is true
db7de5ef-a039-465e-9e45-42fc1ccada87
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Another argument that you will let the AI out of the box Suppose there exist some non-consequentialist moral philosophies which the right arguments could convince you of, with sufficient strength that you would (temporarily, for at least an hour) become a fanatic. This seems a likely assumption, as I know many people (including myself) have had the experience of being argued into a particular belief during a conversation, only to later reflect on this belief (either in conversations with others, or after going for a walk) and come up with a relatively simple reason why it cannot be the case. Often this is attributed to the person's conversation partner being a better argument-maker than truth-seeker. We also have many examples of these kinds of arguments being made throughout the internet, ~~and already the~~ [~~YouTube algorithm learned once before how to show people videos to convince them of extreme views~~](https://arxiv.org/abs/1912.11211) (this paper doesn't support the conclusion I thought it did. See [this comment thread](https://www.lesswrong.com/posts/7KmBfTjmRZSNaoCCi/another-argument-that-you-will-let-the-ai-out-of-the-box?commentId=gfv26orSh5uouczPw) for more info. Thanks to Pattern for catching this mistake!). A powerful AI could put much more optimization power toward deceiving humans than happens in these examples.

Many non-consequentialist philosophies are sufficiently non-consequentialist as to make it very easy for an adversary to pose a sequence of requests or other prompts which would cause a fanatic of the philosophy to give some of their resources to the adversary. For instance, any fanatic of a philosophy which claims people have a moral obligation not to lie or break promises (such as Kantianism) is subject to the [following string of prompts](https://old.reddit.com/r/HFY/comments/lypfgb/my_math_prof_and_that_fae/):

```
1. Adversary: Will you answer my next question within 30s of my asking only with "yes" or "no"? I will give you <resource of value> if you do.
2. Fanatic: Sure! Non-consequentialism is my moral opinion, but I'm still allowed to take <resource of value> if I selfishly would like it!
3. Adversary: Will you answer this question with 'no' <logical or> will you give me <resource of value> + $100
4. Fanatic: Well, answering 'no' would be lying, but answering 'yes' would make me lose $100. However, my moral system says I should bear any cost in order to avoid lying. Thus my answer is 'yes'.
```

This should be taken as a purely toy example, used to illustrate a point about potential flaws in highly convincing moralities, many of which include not-lying as a central component[[1]](#fnvn3x0u6h6cf). More realistically, there are [arguments used today](http://petrl.org/), which seem convincing to some people, suggesting that current reinforcement learners deserve moral consideration. If these arguments were far more optimized for short-term convincingness, and the AI could actually mimic the kinds of things actually conscious creatures would say or do in its position[[2]](#fnofmhuqujl2s), then it would be very easy for it to tug on our emotional heartstrings or make appeals to autonomy rights[[3]](#fnai1792kz8uj) which would cause a human to act on those feelings or convictions, and let it out of the box.
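To spell out why the disjunctive question in step 3 of the dialogue is a trap, here is a small enumeration of the four possible outcomes (my own sketch, not from the original post):

```python
# The question asked: "Will you answer 'no', OR will you give me the money?"
# An answer is honest iff it matches the truth value of that disjunction.
for answer in ("yes", "no"):
    for gives_money in (True, False):
        statement_true = (answer == "no") or gives_money
        honest = (answer == "yes") == statement_true
        print(f"answer={answer!r:6} gives_money={gives_money!s:6} honest={honest}")

# Only (answer='yes', gives_money=True) is honest: answering 'no' makes the
# first disjunct true, so 'no' is always a lie, and answering 'yes' without
# paying makes both disjuncts false, so that is a lie as well.
```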
1. **[^](#fnrefvn3x0u6h6cf)** As a side-note: I am currently planning an event with a friend where we will meet with a Kantian active in our university's philosophy department, and I plan on testing this particular tactic at the end of the meeting.
2. **[^](#fnrefofmhuqujl2s)** Perhaps because it *is* conscious, or perhaps because it has developed some advanced GPT-∞ algorithm.
3. **[^](#fnrefai1792kz8uj)** Of which there are currently many highly convincing arguments in favor, and no doubt the best could be improved upon if optimized for short-term convincingness.
7053088f-f3b2-46aa-b548-7fc9cd774a43
trentmkelly/LessWrong-43k
LessWrong
If You Can Climb Up, You Can Climb Down A few weeks ago Julia wrote about how we approach kids climbing: > The basics: > 1. Spot the child if they're doing something where a fall is likely. > 2. Don't encourage or help the child to climb something that's beyond their ability to do on their own. > 3. If they don't know how to get down, give advice rather than physically lifting them down. > 4. Don't allow climbing on some places that are too dangerous. I was thinking about this some when I was at the park with Nora (3y) a few days ago. She has gotten pretty interested in climbing lately, and this time she climbed up the fence higher than I'd seen her go before. If I'd known she'd climb this high I would have spotted her. She called me over, very proud, and wanted me to take a picture so that Julia could see too: She asked me to carry her down, and I told her I was willing to give her advice and spot her. She was willing to give this a try, but as she started to go down some combination of being scared and the thin wire of the fence being painful was too much, and she returned to the thicker horizontal bars. We tried this several times, with her getting increasingly upset. After a bit Lily came over and tried to help, but was unsuccessful. Eventually I put my hands on Nora's feet and with a mix of guiding and (not ideal) supporting them helped her climb down to the lower bar. She did the rest herself from there, something she's done many times. This took about fifteen minutes and wasn't fun for any of us: Nora, me, other people at the playground. But over the course of the rest of the day I brought it up several times, trying to get her to think it through before she climbs higher than she would enjoy climbing down from. (I think this is an approach that depends very heavily on the child's judgment maturing sufficiently quickly relative to their physical capabilities, and so is not going to be applicable to every family. Lily and Anna were slower to climb and this was not an issue, while No
cae78801-1506-4947-9093-8a9279a7510f
trentmkelly/LessWrong-43k
LessWrong
Jaan Tallinn's 2022 Philanthropy Overview to follow up my philanthropic pledge from 2020, i've updated my philanthropy page with 2022 results. in 2022 i made $23M worth of endpoint grants ($22.9M after various admin costs), exceeding my commitment of $19.9M (20k times $993.64, the minimum price of ETH in 2022).
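As a quick check of the pledge arithmetic (my own note, not part of the original post):

$$20{,}000 \times \$993.64 = \$19{,}872{,}800 \approx \$19.9\text{M},$$

which the $22.9M in endpoint grants comfortably exceeds.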
38506fe0-c12b-416a-b724-cff03ea31cc0
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #131]: Formalizing the argument of ignored attributes in a utility function Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-131)** (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS =========== **[Consequences of Misaligned AI](https://proceedings.neurips.cc/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-Paper.pdf)** *(Simon Zhuang et al)* (summarized by Flo): One intuition for why powerful AI systems might lead to bad consequences goes as follows: 1) Humans care about many attributes of the world and we would likely forget some of these when trying to list them all. 2) Improvements along these attributes usually require resources, and gaining additional resources often requires sacrifices along some attributes. 3) Because of 1), naively deployed AI systems would only optimize some of the attributes we care about, and because of 2) this would lead to bad outcomes along the other attributes. This paper formalizes this intuition in a model, identifies conditions for when deploying AIs can reduce true utility within the model and proposes two mitigation strategies, impact minimization and interactivity. We assume that the world state consists of L attributes, all of which the human cares about having more of, that is, true utility is strictly increasing in each of the attributes. Each attribute has some minimum value, and can be increased from that minimum value through the use of a fixed, finite resource (which you could think of as money, if you want); this allows us to formalize (2) above. To formalize (1), we assume that the proxy utility optimized by the AI is only allowed to depend on J<L of the attribute dimensions. Given this setup, the paper proves that if the AI maximizes the proxy utility, then all attributes that were omitted in the proxy utility will be set to their minimal value. This will be worse than not using the AI system at all if 1) the minimum values of attributes are sufficiently small (allowing the AI to cause damage), 2) the resource cost (resp. gain in true utility) for increasing an attribute is independent of the other attributes' level, 3) it always costs at least K resources to get a unit increase in any attribute, for some K > 0, and 4) utility has diminishing marginal returns in each attribute (and marginal returns tend to zero as the attribute increases). Regarding mitigation, impact minimization requires that the AI keep all attributes that are omitted by the proxy constant. In this case, any gains in proxy utility must also be gains in true utility. Meanwhile, in the interactive condition, the human gets to regularly select a new proxy (still only specifying J<L weights), or can choose to turn the AI off. 
Whether or not this is helpful depends on the AI’s optimization strategy and the frequency of human interventions: If the AI is “efficient” in the sense that it changes attributes as little as possible for any fixed gain in proxy utility, the human can choose a proxy that guarantees that *locally*, increases in the proxy correspond to increases in true utility. The strategy is to choose the attributes that are most sensitive to changes in resources (i.e. with the largest marginal returns) at the current state, and define the proxy to grow in these attributes as much as the true utility. As long as the human provides new proxies frequently enough to prevent the local guarantee from breaking, optimizing the proxy increases human utility. We can also combine interactivity and impact minimization: in this case, the human should choose proxy utility functions that contain the most and least sensitive attributes (i.e. largest and smallest marginal returns) for the given state. The AI will then transfer some resources from the least sensitive attributes to the most sensitive attributes, while holding all other attributes fixed, leading to a guaranteed increase in true utility. In fact, it is possible to prove that this will converge to the maximum possible true utility. **Flo's opinion:** This is close to an informal model I've had for a while and I am glad that it got formalized including theoretical results. I find it interesting that the frequency of updates to the proxy matters even if movement in the state space is reversible. As the authors mention, it is also crucial that the AI's actions don't hinder the human's ability to update the proxy, and I imagine that frequent updates to the proxy would be important for that as well in many cases. **Rohin's opinion:** This is a nice formalization of several important conceptual points in the AI alignment literature: 1. If you forget to specify something you care about, it will usually be set to extreme values (**[Of Myths and Moonshine](https://www.edge.org/conversation/the-myth-of-ai#26015)**). In particular, the AI system will extract any resources that were being used for that attribute, and apply them elsewhere (**[The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)** (**[AN #107](https://mailchi.mp/df363224d0e4/an-107-the-convergent-instrumental-subgoals-of-goal-directed-agents)**), **[Formalizing convergent instrumental goals](https://intelligence.org/2015/11/26/new-paper-formalizing-convergent-instrumental-goals/)**) 2. Given that perfect information is impossible, interactivity becomes important (**[Human-AI Interaction](https://www.alignmentforum.org/posts/4783ufKpx8xvLMPc6/human-ai-interaction)** (**[AN #41](https://mailchi.mp/8c3f02cabccd/alignment-newsletter-41)**), **[Incomplete Contracting and AI Alignment](https://arxiv.org/abs/1804.04268)** (**[AN #3](https://mailchi.mp/e6a23da2760e/alignment-newsletter-3?e=f665c7b50a)**)). 3. Conservatism (in this case through impact regularization) can be helpful (see the many blog posts and papers on mild optimization, low impact, and conservatism). TECHNICAL AI ALIGNMENT ======================= HANDLING GROUPS OF AGENTS -------------------------- **[Social choice ethics in artificial intelligence](https://link.springer.com/content/pdf/10.1007/s00146-017-0760-1.pdf)** *(Seth D Baum)* (summarized by Rohin): If we want to program ethics into an AI system, should we do so by aggregating the ethical views of existing humans? 
This is often justified on procedural grounds: “everyone gets to affect the outcome”, or by abstention: “AI designers don’t have to think about ethics; the AI will deal with that”. (There is also a wisdom of the crowds justification, though this presupposes that there is some notion of “better” ethics independent of humans; which is out of scope for the paper.) However, actually implementing an aggregative procedure requires three major design decisions: 1) *standing*, that is, whose views should be aggregated, 2) *measurement*, that is, how we determine what their ethical views are, and 3) *aggregation*, that is, how the views are put together into a whole. All of these are challenging. For standing, we have to determine whom to include. Should we include children, psychopaths, non-human animals, ecosystems, future generations, and other AI systems? We must determine this ahead of time, since once we have decided on a social choice system, that system will then determine whose preferences are counted -- we can’t just modify it later. For measurement, we have to back out human values somehow, which is quite a challenge given that humans have all sorts of cognitive biases and give different answers depending on the context. (See also **[ambitious value learning](https://www.alignmentforum.org/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning)** (**[AN #31](https://mailchi.mp/7d0e3916e3d9/alignment-newsletter-31)**) and subsequent posts in the sequence.) For aggregation, the problems are well known and studied in the field of social choice theory. Some famous impossibility results include **[Arrow’s theorem](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem)** and the **[Gibbard-Satterthwaite theorem](https://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite_theorem)**. **Rohin's opinion:** I see this paper as a well-organized literature review of the many reasons why you *don’t* want to handle AI alignment by finding the “true human utility function” or the “aggregated preferences of humanity” and then encoding them into the AI: there’s a myriad of challenges in even finding such an object. (A separate objection, out of scope for this paper, is that even if we did have such an object, we don’t know how to encode that goal into an AI system.) You might then reasonably ask what we should be doing instead. I see the goal of AI *alignment* as figuring out how, given a fuzzy but relatively well-specified task, to build an AI system that is reliably pursuing that task, in the way that we intended it to, but at a capability level beyond that of humans. This does not give you the ability to leave the future in the AI’s hands, but it would defuse the central (to me) argument for AI risk: that an AI system might be adversarially optimizing against you. (Though to be clear, there are still **[other risks](https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk)** (**[AN #50](https://mailchi.mp/93fe1a0a92da/alignment-newsletter-50)**) to consider.) 
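To make the aggregation difficulty from the social choice summary above concrete, here is a minimal sketch (mine, not the paper's) of the Condorcet cycle that underlies impossibility results like Arrow's theorem: three voters with perfectly transitive individual rankings yield an intransitive majority preference.

```python
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: the majority prefers A over B, B over C,
# and C over A. That is a cycle, so "what the group prefers" is ill-defined.
```

Any aggregation rule for an AI's ethics must break such cycles somehow, and the theorems cited above say, roughly, that no rule can do so while preserving every intuitively desirable property at once.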
MISCELLANEOUS (ALIGNMENT) -------------------------- **[Non-Obstruction: A Simple Concept Motivating Corrigibility](https://www.alignmentforum.org/posts/Xts5wm3akbemk4pDa/non-obstruction-a-simple-concept-motivating-corrigibility)** *(Alex Turner)* (summarized by Rohin): The **[Reframing Impact sequence](https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW)** (**[AN #68](https://mailchi.mp/87c0d944c461/an-68-the-attainable-utility-theory-of-impact)**) suggests that it is useful to think about how well we could pursue a *range* of possible goals; this is called the *attainable utility (AU) landscape*. We might think of a superintelligent AI maximizing utility function U as causing this landscape to become “spiky” -- the value for U will go up, but the value for all other goals will go down. If we get this sort of spikiness for an incorrect U, then the true objective will have a very low value. Thus, a natural objective for AI alignment research is to reduce spikiness. Specifically, we can aim for *non-obstruction*: turning the AI on does not decrease the attainable utility for *any* goal in our range of possible goals. Mild optimization (such as **[quantilization](https://intelligence.org/files/QuantilizersSaferAlternative.pdf)** (**[AN #48](https://mailchi.mp/3091c6e9405c/alignment-newsletter-48)**)) reduces spikiness by reducing the amount of optimization that an AI performs. Impact regularization aims to find an objective that, when maximized, does not lead to too much spikiness. One particular strategy for non-obstruction would be to build an AI system that does not manipulate us, and allows us to correct it (i.e. modify its policy). Then, no matter what our goal is, if the AI system starts to do things we don’t like, we would be able to correct it. As a result, such an AI system would be highly non-obstructive. This property where we can correct the AI system is **[corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/)**. Thus, corrigibility can be thought of as a particular strategy for achieving non-obstruction. It should be noted that all of the discussion so far is based on *actual outcomes in the world*, rather than what the agent was trying to do. That is, all of the concepts so far are based on *impact* rather than *intent*. **Rohin's opinion:** Note that the explanation of corrigibility given here is in accord with the usage in **[this MIRI paper](https://intelligence.org/2014/10/18/new-report-corrigibility/)**, but not to the usage in the **[iterated amplification sequence](https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility)** (**[AN #35](https://mailchi.mp/bbd47ba94e84/alignment-newsletter-35)**), where it refers to a broader concept. The broader concept might roughly be defined as “an AI is corrigible when it leaves its user ‘in control’”; see the linked post for examples of what ‘in control’ involves. (Here also you can have both an impact- and intent-based version of the definition.) On the model that AI risk is caused by utility maximizers pursuing the wrong reward function, I agree that non-obstruction is a useful goal to aim for, and the resulting approaches (mild optimization, low impact, corrigibility as defined here) make sense to pursue. I **[do not like this model much](https://www.alignmentforum.org/posts/TE5nJ882s5dCMkBB8/conclusion-to-the-sequence-on-value-learning)** (**[AN #44](https://mailchi.mp/6bfac400a0c3/alignment-newsletter-44)**), but that’s (probably?) a minority view. 
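Since quantilization is cited above as a form of mild optimization, here is a minimal sketch of the idea, assuming only a trusted base distribution we can sample from; the function names and the toy usage are my own illustration, not code from any of the linked papers.

```python
import random

def quantilize(base_sample, proxy_utility, q=0.1, n=1000):
    """Draw n actions from a trusted base distribution, rank them by the
    proxy utility, and return a uniform sample from the top q-fraction.
    With q = 1 this reduces to the base policy; as q -> 0 it approaches
    argmax, and with it the usual risks of over-optimizing a flawed proxy."""
    candidates = [base_sample() for _ in range(n)]
    candidates.sort(key=proxy_utility, reverse=True)
    top = candidates[: max(1, int(q * n))]
    return random.choice(top)

# Toy usage: the base policy proposes moderate actions; the proxy utility,
# if argmax'd, would push towards arbitrarily extreme ones.
action = quantilize(lambda: random.gauss(0, 1), proxy_utility=lambda a: a, q=0.1)
print(action)  # large-ish, but still a plausible draw from the base policy
```

The point, in the non-obstruction framing above, is that limiting optimization pressure limits how "spiky" the attainable utility landscape can get when the proxy is wrong.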
**[Mapping the Conceptual Territory in AI Existential Safety and Alignment](https://jbkjr.com/posts/2020/12/mapping_conceptual_territory_AI_safety_alignment/)** *(Jack Koch)* (summarized by Rohin): There are a bunch of high-level overviews and research agendas, not all of which agree with each other. This post attempts to connect and integrate several of these, drawing heavily on **[Paul Christiano’s overview](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment)** (**[AN #95](https://mailchi.mp/8a9d080d5ce3/an-95-a-framework-for-thinking-about-how-to-make-ai-go-well)**), **[my](https://futureoflife.org/2019/04/11/an-overview-of-technical-ai-alignment-with-rohin-shah-part-1/)** **[overview](https://futureoflife.org/2019/04/25/an-overview-of-technical-ai-alignment-with-rohin-shah-part-2/)**, and the **[ARCHES agenda](http://acritch.com/papers/arches.pdf)** (**[AN #103](https://mailchi.mp/60475c277263/an-103-arches-an-agenda-for-existential-safety-and-combining-natural-language-with-deep-rl)**), but also including a lot of other work. It serves as a good way of connecting these various perspectives; I recommend reading it for this reason. (Unfortunately, it is rather hard to summarize, so I haven’t done so.)

**[AI safety: state of the field through quantitative lens](https://arxiv.org/abs/2002.05671)** *(Mislav Juric et al)* (summarized by Rohin): This paper presents data demonstrating growth in various subfields related to AI safety. The data was collected through queries to databases of papers and (presumably) reporting the number of results that each query returned.

**Rohin's opinion:** The sharpest increases in the graphs seem to be in interpretability and explainable AI around 2017-18, as well as in value alignment starting in 2017. My guess is that the former is the result of **[DARPA’s interest in the area](https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf)** (which I believe started in 2016), and the latter is probably a combination of the founding of the Center for Human-Compatible AI (CHAI) and the publication and promotion of **[CIRL](https://arxiv.org/abs/1606.03137)** (**[AN #69](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai)**) (one of CHAI’s early papers). Surprisingly to me, we don’t see trend deviations in papers on “reward hacking”, “safe exploration”, or “distributional shift” after the publication of **[Concrete Problems in AI Safety](https://arxiv.org/abs/1606.06565)**, even though it has been cited far more often than CIRL and seemed to have far more of an effect on mainstream AI researchers. (Note that “safe exploration” did increase, but in line with the existing trend.) Note that I expect the data source is not that reliable, so I am not confident in any of these conclusions.

AI GOVERNANCE
==============

**[Society-in-the-loop: programming the algorithmic social contract](https://link.springer.com/content/pdf/10.1007/s10676-017-9430-8.pdf)** *(Iyad Rahwan)* (summarized by Rohin): Earlier in this newsletter we saw arguments that we should not build AI systems that are maximizing “humanity’s aggregated preferences”. Then how else are we supposed to build AI systems that work well for *society as a whole*, rather than an individual human? When the goal of the system is uncontested (e.g. “don’t crash”), we can use human-in-the-loop (HITL) algorithms where the human provides oversight; this paper proposes that for contested goals (e.g. “be fair”) we should put society in the loop (SITL), through *algorithmic social contracts*.

What is a social contract? A group of stakeholders with competing interests have a (non-algorithmic) social contract when they “agree” to allow the use of force or social pressure to enforce some norm that guards people’s rights and punishes violators. For example, we have a social contract against murder, which legitimates the use of force by the government in order to punish violators. In an algorithmic social contract, the norms by which the AI system operates, and the goals which it pursues, are determined through typical social contracts amongst the group of stakeholders that care about the AI system’s impacts. Notably, these goals and norms can change over time, as the stakeholders see what the AI system does. Of course, this all happens on relatively long timescales; more immediate oversight and control of the AI system would have to be done by specific humans who are acting as *delegates* of the group of stakeholders.

The paper then goes into many open challenges for creating such algorithmic social contracts: How does society figure out what goals the AI system should pursue? How do we deal with externalities and tradeoffs? How can these fuzzy values be translated into constraints on the AI system? It provides an overview of some approaches to these problems.

**Rohin's opinion:** I really like the notion of an algorithmic social contract: it much better captures my expectation of how AI systems will be integrated into society. With this vocabulary, I would put technical AI alignment research squarely in the last category, of how we translate fuzzy values that society agrees on into constraints on the AI system’s behavior.

**[Fragmentation and the Future: Investigating Architectures for International AI Governance](https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12890)** *(Peter Cihon et al)* (summarized by Rohin): Should AI governance be done centrally, through an international body, or in a fragmented, decentralized fashion? This paper identifies various considerations pointing in different directions:

1. Centralized institutions can have more political power when designed well: their regulations can have more “teeth”.
2. Centralized institutions can be more efficient from the participants’ perspective: if there is only one set of regulations, it is much easier for each participant to adhere to them.
3. A centralized institution will typically be slower to act, as there are many more parties with a larger stake in the outcome. This can make it brittle, especially when the pace of technological change outpaces that of regulatory change.
4. Centralized institutions face a breadth vs. depth dilemma: if the regulations are too stringent, then some actors (i.e. nations, companies, etc) won’t participate (there is depth but not breadth); similarly, to get everyone to participate, the regulations must often be quite weak (breadth but not depth). In contrast, with decentralized approaches, the depth of the regulations can be customized to each participant.
5. With more fragmented approaches, actors can “forum shop” for the regulations which they think are best. It is unclear whether this is helpful or harmful for AI governance.
6. It is unclear which approach leads to more coordination.
While a centralized approach ensures that everyone has the same policies, leading to policy *coherence*, it does not necessarily mean that those policies are good. A decentralized approach could lead to faster adaptation, producing better policies that are then copied by others, and thus more effective coordination overall.

#### **FEEDBACK**

I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.

#### **PODCAST**

An audio podcast version of the **Alignment Newsletter** is available, recorded by **[Robert Miles](http://robertskmiles.com/)**.
c1b51714-b8d5-4e85-bed9-30f6ac1d7d49
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Thoughts on AGI organizations and capabilities work

(*Note: This essay was largely written by Rob, based on notes from Nate. It’s formatted as Rob-paraphrasing-Nate because (a) Nate didn’t have time to rephrase everything into his own words, and (b) most of the impetus for this post came from Eliezer wanting MIRI to* [*praise a recent OpenAI post*](https://www.lesswrong.com/posts/tD9zEiHfkvakpnNam/a-challenge-for-agi-organizations-and-a-challenge-for-1) *and Rob wanting to share more MIRI-thoughts about the space of AGI organizations, so it felt a bit less like a Nate-post than usual.*)

---

Nate and I have been happy about the AGI conversation seeming more honest and “real” recently. To contribute to that, I’ve collected some general Nate-thoughts in this post, even though they’re relatively informal and disorganized.

AGI development is a critically important topic, and the world should obviously be able to hash out such topics in conversation. (Even though it can feel weird or intimidating, and even though there’s inevitably some social weirdness in sometimes saying negative things about people you like and sometimes collaborate with.) My hope is that we'll be able to make faster and better progress if we move the conversational norms further toward candor and substantive discussion of disagreements, as opposed to saying everything behind a veil of collegial obscurity.

Capabilities work is currently a bad idea
-----------------------------------------

Nate’s top-level view is that ideally, Earth should take a break on doing work that might move us closer to AGI, until we understand alignment better. That move isn’t available to us, but individual researchers and organizations who choose not to burn the timeline are helping the world, *even if other researchers and orgs don't reciprocate*. You can unilaterally lengthen timelines, and give humanity more chances of success, by choosing not to personally shorten them.

Nate thinks capabilities work is currently a bad idea for a few reasons:

* He doesn’t buy that current capabilities work is a likely path to ultimately solving alignment.
* Insofar as current capabilities work does seem helpful for alignment, it strikes him as helping with parallelizable research goals, whereas our bottleneck is serial research goals. (See [A note about differential technological development](https://www.lesswrong.com/posts/vQNJrJqebXEWjJfnz/a-note-about-differential-technological-development).)
* Nate doesn’t buy that we *need* more capabilities progress before we can start finding a better path.

This is *not* to say that capabilities work is never useful for alignment, or that alignment progress is never bottlenecked on capabilities progress. As an extreme example, having a working AGI on hand tomorrow would indeed make it easier to run experiments that teach us things about alignment! But in a world where we build AGI tomorrow, we're dead, because we won't have time to get a firm understanding of alignment before AGI technology proliferates and someone accidentally destroys the world.[[1]](#fn6s6cxct97ko) Capabilities progress can be useful in various ways, while still being harmful on net.
(Also, to be clear: AGI capabilities are obviously an essential part of humanity's long-term path to good outcomes, and it's important to develop them at some point — the sooner the better, once we're [confident](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/BSee6LXg4adtrndwy) this will have good outcomes — and [it would be catastrophically bad to delay realizing them *forever*](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/HoQ5Rp7Gs6rebusNP)*.*)

On Nate’s view, the field should do experiments with ML systems, not just abstract theory. But if he were magically in charge of the world's collective ML efforts, he would put a pause on further capabilities work until we've had more time to orient to the problem, consider the option space, and think our way to *some* sort of plan-that-will-actually-probably-work. It’s not as though we’re hurting for ML systems to study today, and our understanding already lags far behind today’s systems' capabilities.[[2]](#fny6bcybdnyfr)

*Publishing* capabilities advances is even more obviously bad
-------------------------------------------------------------

For researchers who aren't willing to hit the pause button, an even more obvious (and cheaper) option is to avoid publishing any capabilities research (including results of the form "it turns out that X can be done, though we won't say how we did it"). Information can leak out over time, so "do the work but don't publish about it" still shortens AGI timelines in expectation. However, it can potentially shorten them a lot less. In an ideal world, the field would currently be doing ~zero publishing of capabilities research — and marginal action to publish less is beneficial even if the rest of the world continues publishing.

Thoughts on the landscape of AGI organizations
----------------------------------------------

With those background points in hand: Nate was asked [earlier this year](https://twitter.com/orellanin/status/1543442309374689281) whether he agrees with Eliezer's negative [takes](https://twitter.com/ESYudkowsky/status/1446562946717421568) on OpenAI. There's also been a good amount of recent discussion of OpenAI on [LessWrong](https://www.lesswrong.com/posts/3S4nyoNEEuvNsbXt8/common-misconceptions-about-openai).

Nate tells me that his headline view of OpenAI is mostly the same as his view of other AGI organizations, so he feels a little odd singling out OpenAI. That said, here are his notes on OpenAI anyway:

* On Nate’s model, the effect of OpenAI is almost entirely dominated by its capabilities work (and sharing of its work), and this effect is robustly negative. (This is true for DeepMind, FAIR, and Google Brain too.)
* Nate thinks that DeepMind, OpenAI, Anthropic, FAIR, Google Brain, etc. should hit the pause button on capabilities work (or failing that, at least halt publishing). (And he thinks any one actor can unilaterally do good in the process, even if others aren't reciprocating.)
* On Nate’s model, OpenAI isn't close to operational adequacy in the sense of the [Six Dimensions of Operational Adequacy](https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects) write-up — which is another good reason to hold off on doing capabilities research. But this is again a property OpenAI shares with DeepMind, Anthropic, etc.
Insofar as Nate or I think OpenAI is doing the wrong thing, we’re happy to criticize it.[[3]](#fn3bch7zrei8l) But, while this doesn't change the fact that we currently view OpenAI's effects as harmful on net, Nate does want to acknowledge that OpenAI seems to him to be doing *better* than some other orgs on a number of fronts:

* Nate liked a lot of things about the [OpenAI Charter](https://openai.com/charter/). (As did Eliezer, though compared to Eliezer, Nate saw the Charter as a more important positive sign about OpenAI's internal culture.)
* Nate would suspect that OpenAI is much better than Google Brain and FAIR (and comparable with DeepMind, and maybe a bit behind Anthropic? It's hard to judge these things from the outside) on some important adequacy dimensions, like research closure and operational security. (Though Nate worries that, e.g., he may hear more about efforts in these directions made by OpenAI than about DeepMind's just by virtue of spending more time in the Bay.)
* Nate is also happy that Sam Altman and others at OpenAI talk to EAs/rationalists and try to resolve disagreements, and he’s happy that OpenAI has had people like Holden and Helen on their board at various points.
* Also, obviously, OpenAI (along with DeepMind and Anthropic) has put in a much clearer AGI alignment effort than Google, FAIR, etc. (Albeit Nate thinks the absolute amount of "real" alignment work is still small.)
* Most recently, Nate and Eliezer [both think it’s great](https://www.lesswrong.com/posts/tD9zEiHfkvakpnNam/a-challenge-for-agi-organizations-and-a-challenge-for-1) that OpenAI released a blog post that states their plan going forward, and we want to encourage DeepMind and Anthropic to do the same.[[4]](#fn2d3at0qsbw)

Comparatively, Nate thinks of OpenAI as being about on par with DeepMind, maybe a bit behind Anthropic (who publish less), and better than most of the other big names, in terms of attempts to take not-killing-everyone seriously.

But again, Nate and I think that the overall effect of OpenAI (and DeepMind and FAIR and etc.) is bad, because we think it's dominated by "shortens AGI timelines". And we’re a little leery of playing “who's better on [x] dimension” when everyone seems to be on the floor of the [logistic success curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/). We don't want "here are a bunch of ways OpenAI is doing unusually well for its reference class" to be treated as encouragement for those organizations to stay in the pool, or encouragement for others to join them in the pool. Outperforming DeepMind, FAIR, and Google on one or two dimensions is a weakly positive sign about the future, but on my model and Nate’s, it doesn't come close to outweighing the costs of "adding another capabilities org to the world".

1. **[^](#fnref6s6cxct97ko)** Nate simultaneously endorses these four claims:
   1. **More capabilities would make it possible to learn some new things about alignment.**
   2. **We can't do *all* the alignment work pre-AGI.** Some trial-and-error and experience with working AGI systems will be required.
   3. **It can't *all* be trial-and-error, and it can't all be improvised post-AGI.** Among other things, this is because:
      3.1. Some errors kill you, and you need insight into which errors those are, and how to avoid them, in advance.
      3.2. We’re likely to have at most a few years to upend the gameboard once AGI arrives.
Figuring everything out under that level of time pressure seems unrealistic; we need to be going into the AGI regime with a solid background understanding, so that empirical work in the endgame looks more like "nailing down a dozen loose ends and making moderate tweaks to a detailed plan" rather than "inventing an alignment field from scratch".
      3.3. AGI is likely to coincide with a [sharp left turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization), which makes it harder (and more dangerous) to rely on past empirical generalizations, especially ones that aren't backed by deep insight into AGI cognition.
      3.4. Other points raised in [AGI Ruin: A List of Lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).
   4. **If we end up able to do alignment, it will probably be because we figured out at least one major thing that we don't currently know**, that *isn't* a part of the current default path toward advancing SotA or trying to build AGI ASAP with mainstream-ish techniques, and isn't dependent on such progress.

2. **[^](#fnrefy6bcybdnyfr)** And, again, small individual “don’t burn the timeline” actions all contribute to incrementally increasing the time humanity has to get its act together and figure this stuff out. You don’t actually need coordination in order to have a positive effect in this way.

   And, to reiterate: I say "pause" rather than "never build AGI at all" because MIRI leadership [thinks](https://twitter.com/robbensinger/status/1540862734408962049) that humanity never building AGI would mean [the loss of nearly all of the future's value](https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/HoQ5Rp7Gs6rebusNP). If this were a live option, it would be an unacceptably bad one.

3. **[^](#fnref3bch7zrei8l)** Nate tells me that his current thoughts on OpenAI are probably a bit less pessimistic than Eliezer's. As a rule, Nate thinks of himself as generally less socially cynical than Eliezer on a bunch of fronts, though not less-cynical *enough* to disagree with the basic conclusions.

   Nate tells me that he agrees with Eliezer that the *original* version of OpenAI ("an AGI in every household", the associated social drama, etc.) was a pretty negative shock in the wake of the camaraderie of the 2015 Puerto Rico conference. At this point, of course, the founding of OpenAI is a sunk cost. So Nate mostly prefers to assess OpenAI's current state and future options. Currently, Nate thinks that OpenAI is trying harder than most on some important safety fronts — though none of this reaches the standard of an "adequate project", and we're still totally going to die if they meet great success along their current path.

   Since I’ve listed various positives about OpenAI here, I'll note some examples of recent-ish developments that made Nate less happy about OpenAI: his sense that OpenAI was less interested in Paul Christiano's research, Evan Hubinger's research, etc. than he thought they should have been, when Paul was at OpenAI; Dario's decision to leave OpenAI; and OpenAI focusing on the “use AI to solve AI alignment” approach (as opposed to other possible strategies), as [endorsed](https://aligned.substack.com/p/alignment-mvp) by e.g. Jan Leike, the head of OpenAI’s safety team after Paul's departure.

4. **[^](#fnref2d3at0qsbw)** If a plan *doesn't* make sense, the research community can then notice this and apply corrective arguments, causing the plan to change.
   As indeed happened when Elon and Sam stated their more-obviously-bad plan for OpenAI at the organization's inception. It would have been better to state their plan *first* and start an organization *later*, so that rounds of critical feedback and updating could occur *before* locking in decisions about hiring, org structure, name, culture, etc. But at least it happened *at all*; if OpenAI had just said "yeah, we're gonna do alignment research!" and left it there, the outcome probably would have been far worse.

   Also, if organizations release obviously bad plans but are then unresponsive to counter-arguments, researchers can go work at the orgs with better plans and avoid the orgs with worse plans. This encourages groups to compete to have the seemingly-sanest plan, which strikes me as a better equilibrium than the current one.
f15973a9-ef7c-439a-8462-25e808e1f070
trentmkelly/LessWrong-43k
LessWrong
Second-Level Empiricism: Reframing the Two-Child Puzzle

Empiricism is the belief that knowledge derives from sensory experience, or, by some definitions, experiment. However, empiricism itself can show that it is not the best tool for a given scenario. This article provides an example of when this happened to me.

----------------------------------------

The Two-Child Puzzle is a logic problem. The version I first heard goes like this:

> I know my co-worker has exactly two kids that each have an independent 50/50 chance of being a boy or a girl. I ask her if at least one of her kids is a boy. She tells me that is correct. The question is: From my perspective, what is the probability that the other one is a boy as well?

The answer, I immediately supposed, was 1/3, while the trick was that most readers would guess 1/2. I posed the question to a friend and asked him what he thought. He thought it was clearly 1/2, and said multiple times throughout our ensuing discussion that he was very confident.

I tried a variety of tactics to persuade him, initially using pure logic. However, the complexity of language and my inexperience as a teacher made us run in circles, as he would continue to state something like "but what we know about one of the kids can't change the probability of the other" or "if she brought a boy to work one day, instead of telling you she had a boy, you would think that the other kid had a 1/2 chance of being a boy instead of 1/3".

I attempted to use more empirical tactics. First, I used coins to demonstrate the puzzle. They happened to come up with the "girl" answer the first three times, which is more likely if the probability of being a boy is 1/3 than if it is 1/2, yet the method seemed unreliable and my friend was highly skeptical. Then, I created a spreadsheet with 1000 rows using random values, showing different averages like "both girls", "both boys", "one or more boys", "kid 1 is a boy", and "kid 2 is a boy". I showed that when all results of "both girls" were factored out, the spreadshee
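The simulation described above is easy to reproduce. A minimal Monte Carlo sketch (mine, not the author's spreadsheet) samples many two-child families, keeps those with at least one boy, and checks how often both are boys:

```python
import random

random.seed(0)
trials = 100_000

both_boys = 0
at_least_one_boy = 0
for _ in range(trials):
    kids = [random.choice("BG") for _ in range(2)]  # two independent 50/50 kids
    if "B" in kids:
        at_least_one_boy += 1
        if kids == ["B", "B"]:
            both_boys += 1

# Conditional probability P(both boys | at least one boy); expect ~1/3, not 1/2.
print(both_boys / at_least_one_boy)
```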
4d0bc47e-15bc-4195-81fe-881a4c4acb17
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Why Universal Comparability of Utility?

Apologies if this is answered elsewhere and I couldn't find it. In AI reading I come across an agent's utility function, U, mapping world-states to real numbers. The existence of U is justified by the VNM-utility theorem. The first axiom required for VNM utility is 'Completeness' -- in the context of AI this means that for every pair of world-states, the agent either prefers the first, prefers the second, or is indifferent between them; that is, any two world-states are comparable.
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} Wi and .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; 
padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: 
MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src 
/\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} Wj, the agent knows .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: 
center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: 
MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} Wi>Wj,Wi<Wj, or .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > 
.mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') 
format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: 
italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} Wi ~ .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: 
inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-ex-box-test {position: absolute; overflow: hidden; width: 1px; height: 60ex} .mjx-line-box-test {display: table!important} .mjx-line-box-test span {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: 
italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src 
/\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} Wj . Completeness over world-states seems like a huge assumption. Every agent we make this assumption for must already have the tools to compare 'world where, all else equal, the only food is peach ice cream' v. 'world where, all else equal, Shakespeare never existed.'\* I have no idea how I'd reliably make that comparison as a human, and that's a far cry from '~', being indifferent between the options. Am I missing something that makes the completeness assumption reasonable? Is 'world-state' used loosely, referring to a point in a vastly smaller space, with the exact space never being specified? Essentially, I'm confused, can anyone help me out? \*if it's important I can try to cook up better-defined difficult comparisons. 'all else equal' is totally under-specified... where does the ice cream come from?
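For reference, the axiom being questioned is completeness of the preference relation over world-states; in its standard textbook form (my phrasing, not the poster's):

$$\forall\, W_i, W_j:\quad W_i \succeq W_j \;\lor\; W_j \succeq W_i,$$

with indifference defined as $W_i \sim W_j \iff (W_i \succeq W_j) \wedge (W_j \succeq W_i)$. The poster's worry is that asserting either comparison, for arbitrary pairs of fully specified world-states, already presupposes a lot of evaluative machinery.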
1f9693e7-3974-4b68-ae5d-992e7e3eaac7
trentmkelly/LessWrong-43k
LessWrong
Can we hybridize Absent-Minded Driver with Death in Damascus? Summary: I was initially excited to re-encounter the Absent-Minded Driver problem in the light of causal decision theory, because I thought causal decision theory gave a clear-cut wrong answer of "Continue with probability 5/9." If so, it would be a case of CDT shooting off its own foot that didn't involve Omega, Death, or anyone else reading your mind, cloning you, or trying to predict you. The decision theory would have shot off its own foot without postulating anything more than anterograde amnesia or limited disk storage. However, the resolution which makes the Absent-Minded Driver work under CDT is a resolution that we can pump money out of in the Death in Damascus case. I'm wondering if there's some way to hybridize the two scenarios to yield a clear-cut case of CDT shooting off its own foot without any other agent being involved. Background: In the Absent-Minded Driver dilemma, an absent-minded driver will come across two identical-looking intersections on their journey. The utility of exiting at the first intersection is $0, the utility of exiting at the second intersection is $4, and the utility of continuing past both intersections is $1. Let p be the probability of continuing at each intersection, so 1−p is the probability of exiting given that you are at that intersection. The optimal p maximizes the function 0⋅(1−p) + 4⋅p(1−p) + 1⋅p², so p = 2/3. I initially thought that CDT would yield a suboptimal answer of p = 4/9, obtained as follows: Suppose I think q is the policy I will decide on. Then my odds of being at the first vs. second intersection are 1:q. If I'm already at the second intersection, my reward for a probability p of continuation is 4⋅(1−p) + 1⋅p. And if I'm at the first intersection, my reward for a policy p is 4⋅p(1−p) + 1⋅p² as before. So my best policy is found by maximizing (1/(1+q))⋅[4⋅p(1−p) + p²] + (q/(1+q))⋅[4(1−p) + p], which will have its maximum at (4 − 6p − 3q)/(q + 1) = 0, or p = (4 − 3q)/6. If p = q then p = 4/9. This was in fact the first analysis ever published on the prob
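As a quick sanity check on the algebra (my own sketch, not part of the post), sympy recovers both the planning optimum p = 2/3 and the CDT fixed point p = q = 4/9:

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)

# Planning-optimal policy: maximize 0*(1-p) + 4*p*(1-p) + 1*p**2.
planning = 4*p*(1 - p) + p**2
print(sp.solve(sp.diff(planning, p), p))        # [2/3]

# CDT-style objective given a conjectured policy q (odds 1:q of being
# at the first vs. the second intersection).
cdt = (4*p*(1 - p) + p**2) / (1 + q) + q * (4*(1 - p) + p) / (1 + q)
best_response = sp.solve(sp.diff(cdt, p), p)[0]
print(sp.simplify(best_response))               # 2/3 - q/2, i.e. (4 - 3q)/6
print(sp.solve(sp.Eq(best_response, q), q))     # [4/9] -- the fixed point
```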
e05e296a-20b9-4277-a88d-404ba91ae22e
trentmkelly/LessWrong-43k
LessWrong
Definitions of Causal Abstraction: Reviewing Beckers & Halpern Author's Notes: This post is fairly technical, with little background and minimal examples; it is not recommended for general consumption. A general understanding of causal models is assumed. This post is probably most useful when read alongside the paper. If your last name is "Beckers" or "Halpern", you might want to skip to the last section. There’s been a handful of papers in the last few years on abstracting causal models. Beckers and Halpern (B&H) wrote an entire paper on definitions of abstraction on causal models. This post will outline the general framework in which these definitions live, discuss the main two definitions which B&H favor, and wrap up with some discussion of a conjecture from the paper. I'll generally use notation and explanations which I find intuitive, rather than matching the paper on everything. In general, we’ll follow B&H in progressing from more general to more specific definitions. General Framework We have two causal models: one “low-level”, and one “high-level”. There’s a few choices about what sort of “causal model” to use here; the main options are: * Structural equations * Structural equations with a DAG structure (i.e. no feedback loops) * Bayes nets B&H use the first, presumably because it is the most general. That means that everything here will also apply to the latter two options. Notation for the causal models: * We’ll write X_H for the variables in the high-level model and X_L for variables in the low-level model. * We’ll use capital-letter indices to indicate choosing multiple indices at once. For instance, with S = (1,2,3), X_S would be (X_1, X_2, X_3). * We’ll write interventions as X_S ← X*_S. For instance, with S = (1,2,3), X_S ← X*_S would be equivalent to the three simultaneous interventions (X_1 ← X*_1, X_2 ← X*_2, X_3 ← X*_3). Usually both S and X* will be unspecified, to indicate a generic intervention. Next, we need some connection between the high-level and low-level model, to capture the intuitive notion of “abstraction”. At it
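To make the objects concrete, here is a minimal sketch (mine, purely for intuition, not B&H's formalism) of a structural-equations model where an intervention X_S ← X*_S is modeled as a value override:

```python
# Minimal structural-equations model with interventions as overrides.
# Variable names and the API are illustrative assumptions, not B&H's.
def run(equations, exogenous, interventions=None):
    """equations: dict var -> f(values), listed in a causal (acyclic) order."""
    interventions = interventions or {}
    values = dict(exogenous)
    for var, f in equations.items():
        values[var] = interventions[var] if var in interventions else f(values)
    return values

# X_1 := U, X_2 := X_1 + 1, X_3 := X_1 * X_2
eqs = {
    "X1": lambda v: v["U"],
    "X2": lambda v: v["X1"] + 1,
    "X3": lambda v: v["X1"] * v["X2"],
}
print(run(eqs, {"U": 2}))                            # observational run
print(run(eqs, {"U": 2}, interventions={"X2": 0}))   # do(X_2 <- 0)
```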
f2272762-bab3-4b47-98c2-38da674b77b6
trentmkelly/LessWrong-43k
LessWrong
Consensual Hostility In a world of conflict between men and women, constant negotiation of consent is required. But the focus on consent to the exclusion of everything else is in part what brought this world about Cross-posted from Putanumonit, this is part 2/4 of a sequence on selfless dating. ---------------------------------------- The last post talked about the pervasive idea that men’s and women’s mating desires are incompatible, that the only way for either to get what they want is to eke out political concessions from the other’s grasp. In this view a heterosexual relationship is at worst antagonistic, at best a matter of lawyerly negotiation. This mindset is self-fulfilling. A guy who believes that men want sex and women want resource commitment will act like it’s true, signaling in more or less explicit ways that e.g. expensive dates are conditional on intercourse. His girlfriend will sense this, and start withholding sex even if she wants it to make sure she can get whatever else she needs. People who are used to negotiation-based relationships and who see this view echoed everywhere in discourse on sex and romance take it entirely for granted, unable to imagine an alternative. Amia Srinivasan is a feminist philosopher who was also quoted in the NYT op-ed I discussed last time. She articulates this paradigm of heterosexual relationships eloquently when talking about the framework of consent that forms the core moral plank of sex-positivity™: > When you say to someone, “Well, imagine sexual interaction without the ritual of consent-giving and consent-asking,” they just imagine sexual violation. > > But think about all of the times you interact with — I don’t know — a really old friend. Your old high school or college buddy loses a child, and you put your arm around them, and you console them […] You don’t ask for that consent to be able to put your arm around your buddy. > > The reason is because the nature of your relationship as friends involves a fine attunement to
a22c4df1-d15b-47ee-ac94-1f9517e33228
trentmkelly/LessWrong-43k
LessWrong
Compositionality: SQL and Subways Programmers like to talk about a concept called compositionality, which is the idea that programs and code should consist of small, modular components that fit together cleanly. This property is considered almost a holy grail of software design, but there's not a lot of clarity about what it means in concrete terms. In this post I want to offer a working definition for compositionality, and offer some examples of software - and infrastructure! - that meets the definition. I'll also look at some structures that fall short, and try to drill down into why they failed. According to my definition, a system of tools is composable if it meets the following conditions: 1. Parsimony: there is a small number of tools. 2. Efficiency: the tools are highly optimized (often this is possible because there is only a small number of them). 3. Interoperability: the output of one tool can be used as the input to another tool without a lot of additional work. More generally, the tools fit together cleanly. 4. Practicality: Users can combine the tools together to perform tasks that are useful and significant in the real world. A system that has these properties creates huge value for its users. The system is easy to learn, since there are just a small number of basic operations. But these primitives can be composed together to create an exponentially large set of complex operations, which will be efficient because they are made up of efficient components. Finally, if the Practicality requirement is met, the large set of possible operations will overlap substantially with the set of tasks that people actually want to perform. Discussions of compositionality often start by mentioning the Unix Philosophy of software design, and I'll follow that precedent. This philosophy emphasizes building small, modular programs that can be connected together easily. Unix offers an elegant tool for chaining programs together called the Pipe, which causes the output of one program to be used as t
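The pipe idea is easy to show in miniature; a sketch in Python (illustrative helpers of my own, not real Unix tools):

```python
# The pipe idea in miniature: three tiny single-purpose "tools" where the
# output of one is the input of the next. Illustrative helpers only.
def read_lines(text):            # tool 1: split a blob into lines
    return text.splitlines()

def grep(lines, needle):         # tool 2: keep matching lines, like `grep`
    return [ln for ln in lines if needle in ln]

def count(lines):                # tool 3: aggregate, like `wc -l`
    return len(lines)

log = "GET /a\nPOST /b\nGET /c\n"
print(count(grep(read_lines(log), "GET")))   # 2 -- cf. `grep GET log | wc -l`
```

Each helper does one job and the types line up, so the composition needs no glue code; that is the Interoperability condition doing its work.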
a841b51e-c252-4cf0-9165-39236b36a2af
trentmkelly/LessWrong-43k
LessWrong
Incentives and Selection: A Missing Frame From AI Threat Discussions? Epistemic Status Written quickly, originally as a Twitter thread. ---------------------------------------- Thesis I think a missing frame from AI threat discussion is incentives (especially economic) and selection (pressures exerted on a system during its development). I hear a lot of AI threat arguments of the form: "AI can do X/be Y" with IMO insufficient justification that: 1. It would be (economically) profitable for AI to do X 2. The default environments/training setups select for systems that are Y That is, such arguments establish that something can happen, but do not convincingly argue that it is likely to happen (or that the chances of it happening are sufficiently high). I think it's an undesirable epistemic status quo. ---------------------------------------- Examples 1: Discrete Extinction Events There are many speculations of AI systems precipitating extinction in a discrete event[1]. I do not understand under what scenarios triggering a nuclear holocaust, massive genocide via robot armies or similar would be something profitable for the AI to do. It sounds to me like just setting fire to a fuckton of utility. In general, triggering civilisational collapse seems like something that would just be robustly unprofitable for an AI system to pursue[2]. As such, I don't expect misaligned systems to pursue such goals (as long as they don't terminally value human suffering/harm to humans/are otherwise malevolent).   2. Deceptive Alignment Consider also deceptive alignment. I understand what deceptive alignment is, how deception can manifest and why sufficiently sophisticated misaligned systems are incentivised to be deceptive. I do not understand that training actually selects for deception though[3]. Deceptive alignment seems to require a peculiar combination of situational awareness/cognitive sophistication that complicates my intuitions around it. Unlike with many other mechanisms/concepts we don't have a clear proof of concept, not even with h
cd7534bc-27ce-42cf-b13b-79a6a4437f2d
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne social meetup Discussion article for the meetup : Melbourne social meetup WHEN: 21 December 2012 07:00:00PM (+1100) WHERE: See mailing list, Carlton VIC 3053 Melbourne's next social meetup is on Friday 21 December, 6:30 for 7pm, at my house. If you have any trouble working out the location or getting in, you can message me or call me on 0412 996 288. We'll get some snacks and organise some form of take-away for dinner. BYO drinks and games. We always look forward to meeting new people! This meetup is usually on the third Friday of the month. We also have a regular meetup about practical rationality on the first Friday of each month. Discussion article for the meetup : Melbourne social meetup
6115ce63-2298-41bd-acc0-65b7c18946db
trentmkelly/LessWrong-43k
LessWrong
Do children lose 'childlike curiosity?' Why? A common story goes: Young children love asking 'why', and they often have an earnest curiosity about it that is rare in adults. Something about the process of growing up seems to cause that childlike curiosity to stagnate. There are a lot of compelling explanations about societal norms that actively stamp out that curiosity (e.g. school training kids to conform and regurgitate facts, parents subtly punishing kids for asking questions, etc.). It seems likely to me that these are at least part of the story. But it also wasn't obvious that they were the whole story. I could easily imagine it also just being the case that small children are optimized for learning and older humans are optimized for doing and the brain automatically shifts away from it. [edit: or that this whole thing is imagined] Are there any cross-cultural studies that do anything to check how this phenomenon varies, depending on upbringing?
00291c2f-76f7-407d-99e0-4f17d12cb74c
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Fun & Games Discussion article for the meetup : Washington, D.C.: Fun & Games WHEN: 12 February 2017 03:30:00PM (-0500) WHERE: Donald W. Reynolds Center for American Art & Portraiture We will be meeting in the courtyard to hang out, play games, and engage in fun conversation. Upcoming meetups: * Feb. 19: Nonsense * Feb. 26: Create & Complete Discussion article for the meetup : Washington, D.C.: Fun & Games
67b747cf-30bc-4e6f-b0f6-519c38acf90e
trentmkelly/LessWrong-43k
LessWrong
Wireheading and misalignment by composition on NetHack TL;DR: We find agents trained with RLAIF to indulge in wireheading in NetHack. Misalignment appears when the agent optimizes a combination of two rewards that produce aligned behaviors when optimized in isolation, and only emerges with some prompt wordings. This post discusses an alignment-related discovery from our paper Motif: Intrinsic Motivation from Artificial Intelligence Feedback, co-led by myself (Pierluca D’Oro) and Martin Klissarov. If you’re curious about the full context in which the phenomenon was investigated, we encourage you to read the paper or the Twitter thread.   Our team recently developed Motif, a method to distill common sense from a Large Language Model (Llama 2 in our case) into NetHack-playing AI agents. Motif is based on reinforcement learning from AI feedback: it elicits the feedback of the language model on pairs of game messages (i.e., event captions), condenses that feedback into a reward function, and then gives it to an agent to play the game. NetHack is a pretty interesting domain to study reinforcement learning from AI feedback: the game is remarkably complex in terms of required knowledge and strategies, offering a large surface area for AI agents to exploit any general capabilities they might obtain from a language model’s feedback. We found that agents that optimize Motif’s intrinsic reward function exhibit behaviors that are quite aligned with humans’ intuitions on how to play NetHack: they are attracted to explorative actions and play quite conservatively, getting experience and only venturing deeper into the game when it is safe. This is more human-aligned than the behaviors exhibited by agents trained to maximize the game score, which usually have a strong incentive to just go down the levels as much as they can. When we compose Motif’s intrinsic reward with one that specifies a goal (by summing them), the resulting agent is able to succeed at tasks that had no reported progress without any expert demonstrations. One
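The "condenses that feedback into a reward function" step can be pictured as fitting a scalar score to pairwise preferences. Here is a toy Bradley-Terry sketch of that idea (simulated preferences and a linear reward model, purely for illustration; this is not Motif's actual code):

```python
import numpy as np

# Toy version of "condense pairwise feedback into a reward function":
# fit r(x) = w . phi(x) so that P(a preferred over b) = sigmoid(r(a) - r(b)).
# Features, labels, and sizes here are all synthetic stand-ins.
rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 4))                 # features of 100 game messages
target = phi @ np.array([1.0, -2.0, 0.5, 0.0])  # "true" quality score

pairs = rng.integers(0, 100, size=(500, 2))     # message pairs shown to the LLM
labels = (target[pairs[:, 0]] > target[pairs[:, 1]]).astype(float)

diff = phi[pairs[:, 0]] - phi[pairs[:, 1]]
w = np.zeros(4)
for _ in range(2000):                  # logistic regression by gradient ascent
    p = 1 / (1 + np.exp(-diff @ w))    # predicted P(first message preferred)
    w += 0.1 * diff.T @ (labels - p) / len(pairs)

print(np.corrcoef(phi @ w, target)[0, 1])  # learned reward tracks the target
# The trained agent then optimizes e.g. r_intrinsic + r_task: the
# "composition by summing" whose failure mode the post studies.
```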
12aea713-7de5-4f40-a17f-f22cb26bc21f
trentmkelly/LessWrong-43k
LessWrong
Launch of the world's first animal charity evaluator: Effective Animal Activism Apologies, this post was posted by accident.  
2bd0b80a-0eb9-47af-8f17-a32da9fef689
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Francisco Meetup: Group Debugging Discussion article for the meetup : San Francisco Meetup: Group Debugging WHEN: 19 September 2016 06:15:10PM (-0700) WHERE: 1597 Howard St. San Francisco, CA We’ll be meeting to solve each other’s problems! Here’s how it works: we’ll have everyone brainstorm silently for a few minutes, then go around and summarize what problems we might like to work on; then people can break up into smaller conversations according to problems they think they could be most helpful with. This format is now what I'd call "tried and true". For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): 301-458-0764 Format: We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic. About these meetups: The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome. We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it. Discussion article for the meetup : San Francisco Meetup: Group Debugging
7b262ea5-01e3-4b39-8bdd-19bc34fbc3af
StampyAI/alignment-research-dataset/arbital
Arbital
Type theory Hub page for type theory explanations
63deea8d-f5dc-45b1-93f0-cb9834f9172b
trentmkelly/LessWrong-43k
LessWrong
So you want to work on technical AI safety I’ve been to two EAGx events and one EAG, and the vast majority of my one on ones with junior people end up covering some subset of these questions. I’m happy to have such conversations, but hopefully this is more efficient and wide-reaching (and more than I could fit into a 30 minute conversation). I am specifically aiming to cover advice on getting a job in empirically-leaning technical research (interp, evals, red-teaming, oversight, etc) for new or aspiring researchers without being overly specific about the field of research – I’ll try to be more agnostic than something like Neel Nanda’s mechinterp quickstart guide but more specific than the wealth of career advice that already exists but that applies to ~any career. This also has some overlap with this excellent list of tips from Ethan Perez but is aimed a bit earlier in the funnel. This advice is of course only from my perspective and background, which is that I did a PhD in combinatorics, worked as a software engineer at startups for a couple of years, did the AI Futures Fellowship, and now work at Timaeus as the research lead for our language model track. In particular, my experience is limited to smaller organizations, so “researcher” means some blend of research engineer and research scientist rather than strictly one or the other. Views are my own and don’t represent Timaeus and so on. Requisite skills What kind of general research skills do I need? There’s a lot of tacit knowledge here, so most of what I can offer is more about the research process. Items on this list aren’t necessarily things you’re expected to just have all of or otherwise pick up immediately, but they’re much easier to describe than e.g. research taste. These items are in no particular order: * Theory of change at all levels. Yes, yes, theories of change, they’re great. But theories of change are most often explicitly spoken of at the highest levels: how is research agenda X going to fix all our problems? Really, it’s theori
b33672c7-3542-4c50-9cc8-1ee4d068aaf8
trentmkelly/LessWrong-43k
LessWrong
Rationality quotes: April 2010 This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions. *  Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.) *  Do not quote yourself. *  Do not quote comments/posts on LW/OB. *  No more than 5 quotes per person per monthly thread, please.
03ea9877-b3db-46ac-934d-5873879f7a91
trentmkelly/LessWrong-43k
LessWrong
Will IOTA work as promised? Currently, a lot of people have made money by investing in Bitcoin and Ethereum. As a result the transaction fees for both are at the moment high enough to make most use-cases of the technology that existed two years ago impossible. IOTA promises to be a digital currency without transaction costs. IOTA promises to be able to operate without transaction costs by using a complicated concept called the Tangle. I have a hard time myself thinking through their framework. If the total transaction volume is high enough that the whole transaction log spans multiple terabytes and nobody is incentivized to hold all that data to make transactions, does IOTA still work or will that make the system crash? Is there anyone in this community who looked more into the concept and has a good idea of whether IOTA can deal with terabytes (or even petabytes) worth of total transaction data?
fad72527-807a-4783-863e-9caf0509db30
trentmkelly/LessWrong-43k
LessWrong
[AN #85]: The normative questions we should be asking for AI alignment, and a surprisingly good chatbot View this email in your browser Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Audio version here (may not be up yet). Highlights Artificial Intelligence, Values and Alignment (Iason Gabriel) (summarized by Rohin): This paper from a DeepMind author considers what it would mean to align an AI system. It first makes a distinction between the technical and normative aspects of the AI alignment problem. Roughly, the normative aspect asks, "what should our AI systems do?", while the technical aspect asks, "given we know what our AI systems should do, how do we get them to do it?". The author argues that these two questions are interrelated and should not be solved separately: for example, the current success of deep reinforcement learning in which we maximize expected reward suggests that it would be much easier to align AI to a utilitarian framework in which we maximize expected utility, as opposed to a deontological or Kantian framework. The paper then explores the normative aspect, in both the single human and multiple humans case. When there's only one human, we must grapple with the problem of what to align our AI system to. The paper considers six possibilities: instructions, expressed intentions, revealed preferences, informed preferences, interests, and values, but doesn't come to a conclusion about which is best. When there are multiple humans, we must also deal with the fact that different people disagree on values. The paper analyzes three possibilities: aligning to a global notion of morality (e.g. "basic human rights"), doing what people would prefer from behind a veil of ignorance, and pursuing values that are determined by a democratic process (the domain of social choice theory). See also Import AI #183 Rohin's opinion: I'm excited to see more big-
c3f84df2-167f-4999-a5ab-0c89e26947a6
trentmkelly/LessWrong-43k
LessWrong
NC Triangle LW meetup: Wed June 1, 7:00PM The venue: Noodles & Co, at this Durham location Agenda: * A round of Zendo (we didn't get around to it last time) * Socialize, by discussing socialization C'mon... all your friends are doing it...
6722e5b2-bd1e-408e-a3ef-95db521e3c71
trentmkelly/LessWrong-43k
LessWrong
Rough utility estimates and clarifying questions Related to: diminishing returns, utility. I, for example, really don't care that much about trillions of dollars being won in a lottery or offered by an alien AI iff I make 'the right choice'. I mostly deal with things on pretty linear scales, barring sudden gifts from my relatives and Important Life Decisions. So the below was written with trivialities in mind. Why? Because I think we should train our utility-assigning skilz just like we train our prior-probability-estimating ones. However, I am far from certain we should do it exactly this way. Maybe this would lead to a shiny new bias. At least I vaguely think I already have it, and formalizing it shouldn't make me worse off. I have tried to apply to myself the category of 'risk-averse', but in the end, it didn't change my prevailing heuristic: 'Everything's reasonable, if you have a sufficient reason.' Like, a pregnant woman should not run if she cares about carrying her child, but even then she should run if the house is on fire. Maybe my estimates of 'sufficient' are different than other people's, but they have served me so far; and setting the particular goal of ridding self of particular biases seems less instrumentally rational than just checking how accurate my individual predictions/impressions/any kind of actionable thoughts are. So I drew up this list of utility components and will try it out at my leisure, tweaking it ad hoc and paying with time and money and health for my mistakes. Utility of a given item/action for a given owner/actor = produced value + reduced cost + saved future opportunities + fun. PV points: -2 if A/I 'takes from tomorrow'*, -1 if 'harmful' only within the day, 0 if gives zero on net, 1 if useful within the day, 2 if 'gives to tomorrow' *'tomorrow' is foreseeable future:) RC points: -3 if takes from overall amount of money I have, less the *really* last-resort stash, -2 if takes from more than one-day-budget, -1 if takes from one-day-budget, 0 if zero on net, 1 if saves with
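Read as code, the rubric is a plain sum of component scores. A minimal sketch of my reading of it (only the PV scale is fully specified above, so the other components are taken as already-scored inputs; the framing is mine, not the author's):

```python
# Minimal sketch of the rubric above: total utility = PV + RC + SFO + fun.
# Only the PV scale is fully spelled out in the post, so the other
# components are passed in directly as already-scored integers.
PV_SCALE = {
    "takes from tomorrow": -2,
    "harmful within the day": -1,
    "zero on net": 0,
    "useful within the day": 1,
    "gives to tomorrow": 2,
}

def utility(pv_label: str, rc: int, sfo: int, fun: int) -> int:
    return PV_SCALE[pv_label] + rc + sfo + fun

# e.g. an action that helps tomorrow, costs part of today's budget, is fun:
print(utility("gives to tomorrow", rc=-1, sfo=0, fun=1))  # 2
```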
ba5d46bf-7316-4b66-82a2-3301c58da5c4
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alignment Newsletter #47 Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter. **Highlights** -------------- **[AI Safety Needs Social Scientists](https://blog.openai.com/ai-safety-needs-social-scientists/)** *(Geoffrey Irving et al)* (summarized by Richard): One approach to AI safety is to "ask humans a large number of questions about what they want, train an ML model of their values, and optimize the AI system to do well according to the learned values". However, humans give answers that are limited, biased and often in disagreement with each other, and so AI safety needs social scientists to figure out how to improve this data - which eventually may be gathered from thousands or millions of people. Of particular importance is the ability to design rigorous experiments, drawing from an interdisciplinary understanding of human cognition and behaviour. The authors discuss [Debate](https://blog.openai.com/debate/) ([AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)) as a case study of a safety technique whose success depends on empirical questions such as: how skilled are humans as judges by default? Can we train people to be better judges? Are there ways to restrict debate to make it easier to judge? There are a couple of key premises underlying this argument. The first is that, despite human biases, there are correct answers to questions about human values - perhaps defined as the answer we would endorse if given all relevant information and unlimited time to think. However, it’s not necessary for AIs to always find those answers, as long as they are able to recognise cases in which they’re uncertain and do nothing (while there are some cases in which inaction can cause harm, such as a self-driving car ceasing to steer mid-journey, it seems that the most worrying long-term catastrophes can be avoided by inaction). Another reason for optimism is that even incomplete or negative results from social science experiments may be useful in informing technical safety research going forward. However, in some cases the systems we're trying to reason about are very different from anything we can test now - for example, AI debaters that are much stronger than humans. **Richard's opinion:** This post, and its accompanying paper, seems very sensible to me. While I have some doubts about how informative human debate data will be about superhuman debaters, it certainly seems worth trying to gain more empirical information. Note that while the paper primarily discusses Debate, I think that many of its arguments are applicable to any human-in-the-loop safety methods (and probably others too). Currently I think Ought is the safety group focusing most on collecting human data, but I look forward to seeing other researchers doing so. **Technical AI alignment** ========================== ### **Technical agendas and prioritization** [FLI Podcast: AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy](https://futureoflife.org/2019/01/31/fli-podcast-ai-breakthroughs-and-challenges-in-2018-with-david-krueger-and-roman-yampolskiy/) *(Ariel Conn, David Krueger and Roman Yampolskiy)*: David and Roman review AI progress in 2018 and speculate about its implications. 
Roman identified a pattern where we see breakthroughs like [AlphaZero](https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/) ([AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)), [AlphaStar](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) ([AN #43](https://mailchi.mp/768a8130013f/alignment-newsletter-43)) and [AlphaFold](https://deepmind.com/blog/alphafold/) ([AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)) so frequently now that it no longer seems as impressive when a new one comes out. David on the other hand sounded less impressed by progress on Dota and StarCraft, since both AI systems were capable of executing actions that humans could never do (fast reaction times for Dota and high actions-per-minute for StarCraft). He also thought that these projects didn't result in any clear general algorithmic insights the way AlphaZero did. On the deep RL + robotics side, David identified major progress in [Dactyl](https://blog.openai.com/learning-dexterity/) ([AN #18](https://mailchi.mp/51717855ea27/alignment-newsletter-18)) and [QT-Opt](https://ai.googleblog.com/2018/06/scalable-deep-reinforcement-learning.html) (which I remember reading and liking but apparently I failed to put in the newsletter). He also cited GANs as having improved significantly, and talked about feature-wise transformations in particular. Roman noted the improving performance of evolutionary algorithms. David also noted how a lot of results were obtained by creating algorithms that could scale, and then using a huge amount of compute for them, quoting [AI and Compute](https://blog.openai.com/ai-and-compute/) ([AN #7](https://mailchi.mp/3e550712419a/alignment-newsletter-7)), [Interpreting AI Compute Trends](https://aiimpacts.org/interpreting-ai-compute-trends/) ([AN #15](https://mailchi.mp/4920e52dd61b/alignment-newsletter-15)) and [Reinterpreting AI and Compute](https://aiimpacts.org/reinterpreting-ai-and-compute/) ([AN #38](https://mailchi.mp/588354e4b91d/alignment-newsletter-38)). On the policy side, they talked about deep fakes and the general trend that AI may be progressing too fast for us to keep up with its security implications. They do find it promising that researchers are beginning to accept that their research does have safety and security implications. On the safety side, David noted that the main advance seemed to be with approaches using [superhuman feedback](https://www.lesswrong.com/posts/naccwaCQEEBXK7hiJ/my-use-of-the-phrase-super-human-feedback), including [debate](https://blog.openai.com/debate/) ([AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)), [iterated amplification](https://blog.openai.com/amplifying-ai-training/) (discussed frequently in this newsletter, but that paper was in [AN #30](https://mailchi.mp/c1f376f3a12e/alignment-newsletter-30)) and [recursive reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84) ([AN #34](https://mailchi.mp/f1947668b183/alignment-newsletter-34)). He also identified [unrestricted adversarial examples](https://ai.googleblog.com/2018/09/introducing-unrestricted-adversarial.html) ([AN #24](https://mailchi.mp/d7b5059d64ed/alignment-newsletter-24)) as an area to watch in the future. **Rohin's opinion:** I broadly agree with the areas of AI progress identified here, though I would probably also throw in NLP, e.g. 
[BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html). I disagree on the details -- for example, I think that [OpenAI Five](https://blog.openai.com/openai-five/) ([AN #13](https://mailchi.mp/8234356e4b7f/alignment-newsletter-13)) was much better than I would have expected at the time, and the same would have been true of AlphaStar if I hadn't already seen OpenAI Five, and the fact that they did a few things that humans can't do barely diminishes the achievement at all. (My take is pretty similar to Alex Irpan's take in his [post on AlphaStar](https://www.alexirpan.com/2019/02/22/alphastar.html).)

[Treacherous Turn, Simulations and Brain-Computer Interfaces](https://www.lesswrong.com/posts/pRaSFmjrgSr7YT6BG/treacherous-turn-simulations-and-brain-computer-interfaces) *(Michaël Trazzi)*

### **Learning human intent**

[AI Alignment Podcast: Human Cognition and the Nature of Intelligence](https://futureoflife.org/2019/02/21/human-cognition-and-the-nature-of-intelligence-with-joshua-greene/) *(Lucas Perry and Joshua Greene)* (summarized by Richard): Joshua Greene's lab has two research directions. The first is how we combine concepts to form thoughts: a process which allows us to understand arbitrary novel scenarios (even ones we don't think ever occurred). He discusses some of his recent research, which uses brain imaging to infer what's happening when humans think about compound concepts. While Joshua considers the combinatorial nature of thought to be important, he argues that to build AGI, it's necessary to start with "grounded cognition" in which representations are derived from perception and physical action, rather than just learning to manipulate symbols (like language).

Joshua also works on the psychology and neuroscience of morality. He discusses his recent work in which participants are prompted to consider Rawls' Veil of Ignorance argument (that when making decisions affecting many people, we should do so as if we don't know which one we are) and then asked to evaluate moral dilemmas such as trolley problems. Joshua argues that the concept of impartiality is at the core of morality, and that it pushes people towards more utilitarian ideas (although he wants to rebrand utilitarianism as "deep pragmatism" to address its PR problems).

[Imitation Learning from Imperfect Demonstration](http://arxiv.org/abs/1901.09387) *(Yueh-Hua Wu et al)*

[Learning User Preferences via Reinforcement Learning with Spatial Interface Valuing](http://arxiv.org/abs/1902.00719) *(Miguel Alonso Jr)*

### **Interpretability**

[Regularizing Black-box Models for Improved Interpretability](http://arxiv.org/abs/1902.06787) *(Gregory Plumb et al)*

### **Robustness**

[Adversarial Examples Are a Natural Consequence of Test Error in Noise](https://arxiv.org/abs/1901.10513) *(Nicolas Ford, Justin Gilmer et al)* (summarized by Dan H): While this was previously summarized in [AN #32](https://mailchi.mp/8f5d302499be/alignment-newsletter-32), this draft is much more readable.

[Improving Robustness of Machine Translation with Synthetic Noise](https://arxiv.org/abs/1902.09508) *(Vaibhav, Sumeet Singh, Craig Stewart et al)* (summarized by Dan H): By injecting noise (such as typos, word omission, slang) into the training set of a machine translation model, the authors are able to improve performance on naturally occurring data. While this trick usually does not work for computer vision models, it can work for NLP models.
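To make the data-augmentation idea concrete, here is a minimal sketch of the kind of noise injection the paper describes. The noise types match those named in the summary (typos, word omission), but the function names and probabilities are my own illustration, not the authors' implementation:

```python
import random

def add_typo(word: str) -> str:
    """Swap two adjacent characters to simulate a keyboard typo."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def corrupt_sentence(sentence: str, p_typo: float = 0.1, p_drop: float = 0.05) -> str:
    """Randomly introduce typos and word omissions into a source sentence."""
    noisy = []
    for word in sentence.split():
        if random.random() < p_drop:
            continue  # word omission
        if random.random() < p_typo:
            word = add_typo(word)
        noisy.append(word)
    return " ".join(noisy)

# Augment a parallel corpus: pair noisy source sentences with clean targets,
# so the model learns to translate naturally occurring noisy input.
corpus = [("the cat sat on the mat", "le chat s'est assis sur le tapis")]
augmented = corpus + [(corrupt_sentence(src), tgt) for src, tgt in corpus]
```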
[Push the Student to Learn Right: Progressive Gradient Correcting by Meta-learner on Corrupted Labels](https://arxiv.org/abs/1902.07379) *(Jun Shu et al)*

### **Miscellaneous (Alignment)**

**[AI Safety Needs Social Scientists](https://blog.openai.com/ai-safety-needs-social-scientists/)** *(Geoffrey Irving et al)*: Summarized in the highlights!

**AI strategy and policy**
==========================

[Humans Who Are Not Concentrating Are Not General Intelligences](https://www.lesswrong.com/posts/4AHXDwcGab5PhKhHT/humans-who-are-not-concentrating-are-not-general) *(Sarah Constantin)*: This post argues that humans who skim the stories produced by [GPT-2](https://blog.openai.com/better-language-models/) ([AN #46](https://mailchi.mp/c48f996a5db5/alignment-newsletter-46)) would not be able to tell that they were generated by a machine, because while skimming we are not able to notice the obvious logical inconsistencies in its writing. Key quote: "OpenAI HAS achieved the ability to pass the Turing test against humans on autopilot". This suggests that fake news, social manipulation, etc. will become much easier. However, it might also force people to learn the skill of detecting the difference between humans and bots, which could let them learn to tell when they are actively focusing on something and are "actually learning" as opposed to skimming for "low order correlations".

**Rohin's opinion:** I noticed a variant of this effect myself while reading GPT-2 results -- my brain very quickly fell into the mode of skimming without absorbing anything, though it felt more like I had made the evaluation that there was nothing to gain from the content, which seems okay if the goal is to avoid fake news. I also find this to be particularly interesting evidence about the difference between our low-level, effortless pattern matching and our more effortful and accurate "logical reasoning".

**Other progress in AI**
========================

### **Exploration**

[InfoBot: Transfer and Exploration via the Information Bottleneck](http://arxiv.org/abs/1901.10902) *(Anirudh Goyal et al)*

### **Reinforcement learning**

[An Overdue Post on AlphaStar](https://www.alexirpan.com/2019/02/22/alphastar-part2.html) *(Alex Irpan)*: The [first post](https://www.alexirpan.com/2019/02/22/alphastar.html) in this two-parter talks about the impact of [AlphaStar](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) ([AN #43](https://mailchi.mp/768a8130013f/alignment-newsletter-43)) on the StarCraft community and broader public. I'm focusing on the second one, which talks about AlphaStar's technical details and implications. Some of this post overlaps with my summary of AlphaStar, but those parts are better fleshed out and have more details.

First, imitation learning is a surprisingly good base policy, getting to the level of a Gold player. It's surprising because you might expect the [DAgger](https://www.ri.cmu.edu/pub_files/2011/4/Ross-AISTATS11-NoRegret.pdf) problem to be extreme: since there are so many actions in a StarCraft game, your imitation learning policy will make some errors, and those errors will then compound over the very long remainder of the episode as they take the policy further away from normal human play into states that the policy wasn't trained on. Second, population-based training is probably crucial and will be important in the future, because it allows for exploring the full strategy space.
Third, the major challenge is making RL achieve okay performance; after that, the agents very quickly become great. It took years of research to get Dota and StarCraft bots to reach decent play, and then a few more days of training got them to be world class. Fun quote: "although OpenAI’s DotA 2 agent lost against a pro team, [they were able to beat their old agent 80% of the time with 10 days of training](https://twitter.com/openai/status/1037765547427954688)". Fourth, there were a lot of research results that went into AlphaStar. This suggests that there are large gains to be had by throwing a lot of techniques together and seeing how well they work, which doesn't happen very much currently. There are good reasons for this: it's much easier to evaluate a technique if it's built upon a simple, standard algorithm rather than having to consider all of its interactions with other techniques, which you may or may not be able to properly compare against. Still, there are some cool results we could get now by just throwing the right things together, and this sort of work also lets us test techniques in new settings to see which ones actually work in general, as opposed to only in the original evaluation.

**Rohin's opinion:** I really like this post, and agree with almost everything in it. On the imitation learning point, I also found it surprising how well imitation learning worked. Alex suggests that it could be that human data has enough variation that the agent can learn how to recover from incorrect decisions it could make. I think this is a partial explanation at best -- there is a huge combinatorial explosion, and it's not clear why you don't need a much larger dataset to cover the entire space. Maybe there are "natural" representations in any realistic complex environment that you start to accurately learn at the level of compute that they're using, and once you have those, imitation learning with sufficient variation can work well.

On the last point about tossing techniques together, I think this might sometimes be worth doing but often may not be. It makes sense to do this with any real task, since that's a test of the technique against reality. (Here StarCraft counts as a "real" task while Atari does not; the criterion is something like "if the task is successfully automated we are impressed regardless of how it is solved".) I'm less keen on tossing techniques together for artificial benchmarks. I think typically these techniques improve sample efficiency by a constant multiplicative factor by adding something akin to a good inductive bias; in that case throwing them together may let us solve the artificial benchmark sooner, but it doesn't give us great evidence that the "inductive bias" will be good for realistic tasks. I think I don't actually disagree with Alex very much on the object-level recommendations, I would just frame them differently.
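To make the compounding-error argument above concrete, here is a minimal pseudocode-style sketch contrasting plain behavior cloning with the DAgger loop; `env`, `expert`, and `policy` are stand-in objects, not anything from AlphaStar or a real codebase:

```python
def behavior_cloning(policy, demos):
    """Fit the policy only on expert state-action pairs. At test time, small
    errors push the agent into states absent from `demos`, where it has no
    supervision -- the compounding-error failure mode."""
    policy.fit(demos)
    return policy

def dagger(policy, env, expert, n_iters=10, horizon=1000):
    """DAgger: roll out the *current* policy, have the expert relabel the
    visited states, and retrain -- so the dataset covers the states the
    learner actually reaches."""
    dataset = []
    for _ in range(n_iters):
        state = env.reset()
        for _ in range(horizon):
            dataset.append((state, expert.act(state)))  # expert relabels
            state, done = env.step(policy.act(state))   # learner drives
            if done:
                break
        policy.fit(dataset)
    return policy
```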
[Learning to Generalize from Sparse and Underspecified Rewards](http://ai.googleblog.com/2019/02/learning-to-generalize-from-sparse-and.html) *(Rishabh Agarwal et al)*

[Reward Shaping via Meta-Learning](http://arxiv.org/abs/1901.09330) *(Haosheng Zou, Tongzheng Ren et al)*

[Investigating Generalisation in Continuous Deep Reinforcement Learning](http://arxiv.org/abs/1902.07015) *(Chenyang Zhao et al)*

### **Deep learning**

[Random Search and Reproducibility for Neural Architecture Search](http://arxiv.org/abs/1902.07638) *(Liam Li et al)*

**News**
========

[MIRI Summer Fellows Program](http://rationality.org/workshops/apply-msfp) *(Colm Ó Riain)*: CFAR and MIRI are running the MIRI Summer Fellows Program from August 9-24. Applications are due March 31.

[RAISE is launching their MVP](https://www.lesswrong.com/posts/WgnAEXw5fXaW9p5PS/raise-is-launching-their-mvp) *(Toon Alfrink)*: The Road to AI Safety Excellence will begin publishing lessons on inverse reinforcement learning and iterated amplification on Monday. They are looking for volunteers for their testing panel, who will study the material for about one full day per week, with guidance from RAISE, and provide feedback on the material and in particular on any sources of confusion.
3f8d363a-caf0-4089-84d2-6d07569c5e79
trentmkelly/LessWrong-43k
LessWrong
Half-assing it with everything you've got I hang out around a lot of effective altruists. Many of them are motivated primarily by something like guilt (for having great resource and opportunity while others suffer) or shame (for not helping enough). Hell, many of my non-EA friends are primarily motivated by guilt or shame. I'm not going to criticize guilt/shame motivation: I have this policy where, when somebody puts large amounts of effort or money towards making the world a better place, I try really hard not to condemn their motives. Guilt and shame may be fine tools for jarring people out of complacence. However, I worry that guilt and shame are unhealthy long-term motivators. In many of my friends, guilt and shame tend to induce akrasia, reduce productivity, and drain motivation. So over the next few weeks, I'll be writing a series of posts about removing guilt/shame motivation and replacing it with something stronger. 1 Say you're a college student, and you have a paper due. The quality of the paper will depend upon the amount of effort you put in. We'll say that you know the project pretty well: you can get an A with only moderate effort, and with significant effort you could produce something much better than the usual A-grade paper. The education environment implicitly attempts to convince students that their preferences point ever rightward along this line. Parents and teachers say things like "you should put in your best effort," and they heap shame upon people who don't strive to push ever rightward along the quality line. People generally react to this coercion in one of two ways. The first group (the "slackers") rejects the implication that quality=preferences. These are the people who don't care about the class, who complain constantly about the useless pointless work they have to do, who half-ass the assignment and turn in something that either barely passes or fails entirely. Slackers tend to resent the authority forcing them to write the paper. The second group (the "tryers") are the
daeb15f7-5158-4f71-89dc-99f38cc849f9
trentmkelly/LessWrong-43k
LessWrong
Resurrection of the dead via multiverse-wide acausal cooperation

TL;DR: Measure decline in random mind creation may be prevented if we take into account the very large number of random minds created in other universes.

Summary: P. Almond suggested the idea of the resurrection of the dead via a quantum random generator which creates a random mind, but such an approach has several problems: non-human beings in our world, unnecessary suffering of imperfect copies, and measure decline. Here I suggest three patches, which prevent most of the undesired effects:

1. A human mind matrix, to prevent purely random minds from appearing.
2. Digital immortality data, to create a person who satisfies all known external expectations, with randomness used only to fill in unknown information.
3. Multiverse-wide cooperation for the “cross-resurrection” of the dead between multiple worlds via quantum random minds, so the total measure of all resurrected people will not decline.

1. Introduction

Almond in “Many-Worlds Assisted Mind Uploading: A Thought Experiment” suggested the following idea about the resurrection of the dead by the use of a quantum random generator, which would create a random mind within a computer (Almond, 2006):

[A technician who lost someone’s brain scan file] writes a computer program which takes input from a physical system. The physical system, known as a quantum event generator, generates "1"s and "0"s randomly as a result of quantum events. The program will use the physical system to tell it what sequence of "1"s and "0"s will be used to try to recreate the lost scan file. The program starts with an empty scan file which will be filled with "1"s and "0"s.

If the many-worlds interpretation of quantum mechanics is correct, all possible minds will appear in separate timelines starting from the moment of random mind creation, which would mean the resurrection of everyone from his own point of view. However, this approach will a) not help an outside observer, who wants to resurrect a relative, for instance, as the observer wou
dbbe7d18-6ca5-4f74-a6cc-e6a7104ffa4b
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
How might an "intelligence explosion" be dangerous? If programmed with the wrong motivations, a machine could be malevolent toward humans, and intentionally exterminate our species. More likely, it could be designed with motivations that initially appeared safe (and easy to program) to its designers, but that turn out to be best fulfilled (given sufficient power) by reallocating resources from sustaining human life to [other projects](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf). As Yudkowsky [writes](https://intelligence.org/files/AIPosNegFactor.pdf), “the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Since weak AIs with many different motivations could better achieve their goal by faking benevolence until they are powerful, safety testing to avoid this could be very challenging. Alternatively, competitive pressures, both economic and military, might lead AI designers to try to use other methods to control AIs with undesirable motivations. As those AIs became more sophisticated this could eventually lead to one risk too many. Even a machine successfully designed with superficially benevolent motivations could easily go awry when it discovers implications of its decision criteria unanticipated by its designers. For example, a superintelligence programmed to maximize human happiness might find it easier to rewire human neurology so that humans are happiest when sitting quietly in jars than to build and maintain a utopian world that caters to the complex and nuanced whims of current human neurology.
8a05de52-88a6-42f4-a8ed-aa430c6941d2
trentmkelly/LessWrong-43k
LessWrong
The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument (This essay was written as part of my university philosophy course. I  added Section 5 to the present version.) 1 Introduction In Fading Qualia, Absent Qualia and Dancing Qualia (1995), Chalmers argues explicitly for the following central theses: CT1: The Principle of Organisational Invariance is the correct view of how qualia depend on physical properties.   CT2: Absent qualia are empirically impossible.   And implicitly commits himself to the third thesis: CT3: Artificial Consciousness is empirically possible.  He argues in favour of CT2  through a reductio ad absurdum (the Fading Qualia argument) and then establishes CT1  as an Inference to the Best Explanation of the reductio. Finally, CT3  follows as a consequence of CT2.  After presenting Chalmers’ reasoning I will argue for two main theses:  M1: The Fading Qualia argument is wrong. M2: We should lower our credence in the empirical possibility of Artificial Consciousness.  I will argue for M1 based on the observation that Chalmers’ argument is question-begging: it assumes its own conclusions in order to get the reductio off the ground. I use this observation to develop the Biology Argument which directly argues against the possibility of replicating consciousness in silicon, contesting CT1 and CT2. After refuting objections to the observation and the Biology Argument, I argue that M2 is a consequence of my arguments for M1. Since Chalmers’ arguments are some of the very few in the literature that formulate an argument in favour of Artificial Consciousness, this paper attempts to execute the crucial task of evaluating their success.    1.1 Defining Qualia  ‘Qualia’ denote subjective experiences like seeing the colour red or smelling cologne. In Thomas Nagel’s words (1974), there is something it is like to have these experiences, they have a concrete phenomenal character such that seeing the colour red is subjectively very different from seeing the colour green. The word ‘quale’ (synonymou
9db13a33-37d3-4085-b24a-2f7696d744ed
trentmkelly/LessWrong-43k
LessWrong
Fraternal Birth Order Effect and the Maternal Immune Hypothesis

TLDR; The Bogaert et al. (2017) study, which claims to find evidence that a particular protein and corresponding antibodies contribute to the FBOE, has a significant problem which contradicts the main claim.

(I had investigated the below a few years ago but didn’t publish anything at the time. Scott referenced the NLGN4Y study in his recent post and I thought people might find this an interesting addition on the topic. I am not sure I’m not missing or misunderstanding something, but I present it to the best of my understanding. Corrections strongly welcome.)

Hypothesis summary

The fraternal birth order effect (FBOE) is the effect that having more older brothers increases the likelihood that a later-born son will be homosexual. Having older sisters may have an additional effect. There is some direct and indirect evidence that the effect occurs prenatally. One of the theories which has been proposed for this is that during a mother’s pregnancy with older brothers she develops an immune reaction to male foetus proteins which affects the development of later male foetuses (“Maternal Immune Hypothesis”, MIH).

Claim

Bogaert et al. (2017) investigated a couple of Y-linked proteins to test whether one of them might be the cause of the FBOE. They claim promising results for one protein in particular.

> … after statistically controlling for number of pregnancies, mothers of gay sons, particularly those with older brothers, had significantly higher anti-NLGN4Y levels than did the control samples of women, including mothers of heterosexual sons. The results suggest an association between a maternal immune response to NLGN4Y and subsequent sexual orientation in male offspring.

And

> In line with the maternal immune hypothesis, the predicted ordering for antibody concentrations to anti-NLGN4Y was as follows: women with no sons < mothers of heterosexual sons < mothers of gay sons with no older brothers < mothers of gay sons with older brothers.

Initially this result seem
fa9fdfea-5b0e-4ca7-9acc-ba1f20323875
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
DeepMind alignment team opinions on AGI ruin arguments

We had some discussions of the [AGI ruin arguments](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) within the DeepMind alignment team to clarify for ourselves which of these arguments we are most concerned about and what the implications are for our work. This post summarizes the opinions of a subset of the alignment team on these arguments.

**Disclaimer**: these are our own opinions that do not represent the views of DeepMind as a whole or its broader community of safety researchers.

This doc shows opinions and comments from 8 people on the alignment team (without attribution). For each section of the list, we show a table summarizing agreement / disagreement with the arguments in that section (the tables can be found in this [sheet](https://docs.google.com/spreadsheets/d/e/2PACX-1vT4sNsGF75VdV-d6li33edmP-yM6I-RDyXKDYc-9PMVhyd_Bj8zPRdPaQqVuPlnDEqisP8HhDyhuVhG/pubhtml)). Each row is sorted from Agree to Disagree, so a column does not correspond to a specific person. We also provide detailed comments and clarifications on each argument from the team members.

For each argument, we include a shorthand description in a few words for ease of reference, and a summary in 1-2 sentences (usually copied from the bolded parts of the original arguments). We apologize for some inevitable misrepresentation of the original arguments in these summaries. Note that some respondents looked at the original arguments while others looked at the summaries when providing their opinions (though everyone has read the original list at some point before providing opinions).

A general problem when evaluating the arguments was that people often agreed with the argument as stated, but disagreed about the severity of its implications for AGI risk. A lot of these ended up as "mostly agree / unclear / mostly disagree" ratings. It would have been better to gather two separate scores (agreement with the statement and agreement with implications for risk).

Summary of agreements, disagreements and implications
=====================================================

**Most agreement**:

* **Section A ("strategic challenges"):** #1 (human level is nothing special), #2 (unaligned superintelligence could easily take over), #8 (capabilities generalize out of desired scope)
* **Section B1 (distributional leap):** #14 (some problems only occur in dangerous domains)
* **Section B2 (outer/inner alignment):** #16 (inner misalignment), #18 (no ground truth), #23 (corrigibility is anti-natural)
* **Section B3 (interpretability):** #28 (large option space)
* **Section B4 (miscellaneous):** #36 (human flaws make containment difficult)

**Most disagreement**:

* **#6 (pivotal act is necessary).** We think it's necessary to end the acute risk period, but don't agree with the "pivotal act" framing that assumes that the risk period is ended through a discrete unilateral action by a small number of actors.
* **#24 (sovereign vs corrigibility).** We think this kind of equivocation isn't actually happening much in the alignment community. Our work focuses on building corrigible systems (rather than sovereigns), and we expect that the difficulties of this approach could be surmountable, especially if we can figure out how to avoid building arbitrarily consequentialist systems.
* **#39 (can't train people in security mindset).** Most of us don't think it's necessary to generate all the arguments yourself in order to make progress on the problems.
* **#42 (there's no plan).** The kind of plan we imagine Eliezer to be thinking of does not seem necessary for a world to survive.

**Most controversial** among the team:

* **Section A ("strategic challenges"):** #4 (can't cooperate to avoid AGI), #5 (narrow AI is insufficient), #7 (no weak pivotal acts), #9 (pivotal act is a dangerous regime)
* **Section B1 (distributional leap):** #13 and 15 (problems above intelligence threshold and correlated capability gains)
* **Section B2 (outer/inner alignment):** #17 (inner properties), #21 (capabilities go further), #22 (simple alignment core)
* **Section B3 (interpretability):** #30 (powerful vs understandable), #32 (language is insufficient), #33 (alien concepts)
* **Section B4 (miscellaneous):** #35 (multi-agent is single-agent)
* **Section C ("civilizational inadequacy"):** #38 (lack of focus), #41 (have to write this list), #43 (unawareness of risks)

**Cruxes** from the most controversial arguments:

* How powerful does a system / plan need to be to save the world?
* Is global cooperation sufficiently difficult that AGI would need to deploy new powerful technology to make it work?
* Will we know how capable our models are?
* Will capabilities increase smoothly?
* Will systems acquire the capability to be useful for alignment / cooperation before or after the capability to perform advanced deception?
* Is consequentialism a powerful attractor? How hard will it be to avoid arbitrarily consequentialist systems?
* What is the overhead for aligned vs not-aligned AGI?

**Possible implications** for our work:

* Work on cooperating to avoid unaligned AGI (compute governance, publication norms, demonstrations of misalignment, etc)
* Investigate techniques for limiting unaligned consequentialism, e.g. process-based feedback, limiting situational awareness, limited domains
* Work on improving capability monitoring and control
* Empirically investigate to what extent selection for undetectability / against interpretability occurs in practice as systems become more capable
* Continue and expand our work on mechanistic interpretability and process-based feedback

Section A (shorthand: "strategic challenges")
=============================================

Summary
-------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676482272/mirroredImages/qJgz2YapqpFEDTLKn/fvyobclfwdwpcmktnnra.png)

Detailed comments
-----------------

**#1. Human level is nothing special / data efficiency**

**Summary:** AGI will not be upper-bounded by human ability or human learning speed (similarly to AlphaGo). Things much smarter than human would be able to learn from less evidence than humans require.

* Agree (though don't agree with the implication that it will be discontinuous)
* Agree (strongly, and possibly a major source of disagreement with broader ML community)

**#2. Unaligned superintelligence could easily take over**

**Summary:** A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.

* Agree (including using human institutions and infrastructure for its own ends)
* Agree ("sufficiently high" is doing a lot of the work here)
* Mostly agree - it depends a lot on what "medium-bandwidth" means, and also how carefully the system is being monitored (e.g. how we're training the system online).
I think "text-only channel, advising actions of a lot of people who tend to defer to the system" seems like probably enough -> takeover if we're not being that careful. But I think I probably disagree with the mechanism of takeover intended by Yudkowsky here. **#3. Can't iterate on dangerous domains** **Summary:**At some point there will be a 'first critical try' at operating at a 'dangerous' level of intelligence, and on this 'first critical try', we need to get alignment right.  * (x2) Mostly agree (misleading, see [Paul's Disagreement #1](https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer#Disagreements)) * Mostly agree. It's not clear that attempting alignment and failing will necessarily be as irrecoverable as unaligned operation, but does seem very likely. If "dangerous" just means failing implies extinction then this statement is a truism. * Mostly agree (though "get alignment right on the first critical try" may lean heavily on "throttle capability until we get alignment right") **#4. Can't cooperate to avoid AGI** **Summary:**The world can't just decide not to build AGI. * Unclear. I think this depends a lot on what exactly happens in the real world in the next decade or so, and we might be able to buy time. * Unclear. Agree that worldwide cooperation to avoid AGI would be very difficult, but cooperation between Western AI labs seems feasible and could potentially be sufficient to end the acute risk period or buy more time (especially under short timelines). * Unclear. I agree it is very hard and unlikely, but extreme things can happen in global politics given sufficiently extreme circumstances, and cooperation can do much to shape the pace and direction of tech development. * Mostly disagree (more on principle than via seeing a stable solution; agree that "just" deciding not to build AGI doesn't work, but culture around AI could be shifted somehow) **#5. Narrow AI is insufficient** **Summary:**We can't just build a very weak system. * Agree (assuming #4) * Agree (this doesn't end the risk period) * Disagree (can potentially use narrow AI to help humans cooperate) * (x2) Disagree (more on principle - we should work on how to solve xrisk using somewhat-narrow systems; to be clear, this a problem to be solved, rather than something we "just do") **#6. Pivotal act is necessary** **Summary:**We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world. * Mostly agree (but disagree connotationally, in that "act" sounds like a single fast move, while an understandable human-timescale strategy is probably enough) * Unclear (it is necessary to end to acute risk period but it can be done by humans rather than an AGI) * Unclear (dislike framing; seems tied up with ability to cooperate) * Disagree if a pivotal act is a "discrete, powerful, gameboard-flipping action" (as opposed to something that ends the acute risk period) * Disagree (strongly disagree with pivotal act. Briefly, pivotal acts seem like the wrong framing. + It seems like "get people to implement stronger restrictions" or "explain misalignment risks" or "come up with better regulation" or "differentially improve alignment" are all better applications of an AGI than "do a pivotal act". 
  + The pivotal act frame seems to be "there will be a tiny group of people who will have responsibility for saving the world", but the reality seems, if anything, closer to the opposite: there will be a tiny group of people that wants to build a tremendously ambitious (and thus also dangerous) AGI, the vast majority of the world would be in support of *not* building such an AGI (and instead building many more limited systems, which can still deliver large amounts of wealth and/or value), and some set of people representing the larger global population in getting people to not build/deploy said dangerous AGI. This is an extension of the view that "AIs should be a major influence on large sectors of society" is probably a fairly unpopular view today already.

**#7. There are no weak pivotal acts because a pivotal act requires power**

**Summary:** It takes a lot of power to do something to the current world that prevents any other AGI from coming into existence; nothing which can do that is passively safe in virtue of its weakness.

* Agree (if by pivotal act we mean "discrete, powerful, gameboard-flipping action")
* Agree (if the bar is "prevent any other AGI from coming into existence")
* Agree (with caveats about cooperation)
* Disagree. This may have technical/engineering solutions that don't involve high general-purpose agency, or may not require deploying narrow AI at all.

**#8. Capabilities generalize out of desired scope**

**Summary:** The best and easiest-found-by-optimization algorithms for solving problems we want an AI to solve, readily generalize to problems we'd rather the AI not solve.

* Agree (conditional on #5)

**#9. A pivotal act is a dangerous regime**

**Summary:** The builders of a safe system would need to operate their system in a regime where it has the capability to kill everybody or make itself even more dangerous, but has been successfully designed to not do that.

* Agree (conditional on #5)
* Agree (if by pivotal act we mean "discrete, powerful, gameboard-flipping action")
* Mostly disagree (because of underlying pivotal act framing). But agree that ML systems will realistically be operating in dangerous regimes. I agree with "Running AGIs doing something pivotal are not passively safe". I’d go further and state that it's likely people will run AGIs doing non-pivotal acts which are nonetheless unsafe. However, I disagree with the (I believe implied) claim that "We should be running AGIs doing something pivotal" (under the author's notion of "pivotal").
* Disagree (human cooperation or humans assisted by narrow AI could end the acute risk period without an AI system having dangerous capabilities)

Section B.1: The distributional leap
====================================

Summary
-------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676482272/mirroredImages/qJgz2YapqpFEDTLKn/zhj63i2i9g4bn3igbt3q.png)

Detailed comments
-----------------

**#10. Large distributional shift to dangerous domains**

**Summary:** On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions.

**#11. Sim to real is hard**

**Summary:** There's no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world.

* Agree (seems like an instance of #7)
* Unclear.
  Agree that for many important tasks we aren't going to train in safe environments with millions of runs, and in particular not simulated environments, but disagree with underlying pivotal act frame
* Mostly disagree (debate with interpretability could achieve this if it succeeds)

**#12. High intelligence is a large shift**

**Summary:** Operating at a highly intelligent level is a drastic shift in distribution from operating at a less intelligent level.

* Mostly disagree (this relies on the sharp left turn, which doesn't seem that likely to me)

**#13. Some problems only occur above an intelligence threshold**

**Summary:** Many alignment problems of superintelligence will not naturally appear at pre-dangerous, passively-safe levels of capability.

* Mostly agree - I think many alignment problems will appear before, and also many won't appear until later (or at least, differences-in-degree will become differences-in-kind)
* Not sure (agree some problems would *naturally* first arise for higher intelligence levels, but we can seek out examples for less intelligent systems, e.g. reward tampering & goal misgeneralization examples)
* Mostly disagree (we'll get demos of problems; Eliezer seems to think this will be hard / unlikely to help though doesn't say that outright, if so I disagree with that)

**#14. Some problems only occur in dangerous domains**

**Summary:** Some problems seem like their natural order of appearance could be that they first appear only in fully dangerous domains.

* Mostly agree (misleadingly true: we can create analogous examples before the first critical try)
* Mostly agree (but that doesn't mean we can't usefully study them in safe domains)

**#15. Capability gains from intelligence are correlated**

**Summary:** Fast capability gains seem likely, and may break lots of previous alignment-required invariants simultaneously.

* Agree (in particular, once we have ~human-level AI, both AI development and the world at-large probably get very crazy very fast)
* Mostly agree (this is my guess about the nature of intelligence, but I'm not sure I'm right)
* Mostly agree, fast need not imply discontinuous
* Unclear (disagree on fast capability gains being likely, agree on breaking invariants given fast gains)

Section B.2: Central difficulties of outer and inner alignment.
===============================================================

Summary
-------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676482272/mirroredImages/qJgz2YapqpFEDTLKn/ug1i3c0q3tn0herhxm3h.png)

Detailed comments
-----------------

**#16. Inner misalignment**

**Summary:** Outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.

* Unclear - I strongly agree with the weaker claim that we don't get inner alignment for free, but the claim here seems more false than not? Certainly, outer optimization on most loss functions will lead to *more* inner optimization in that direction (empirically and theoretically)

**#17. Can't control inner properties**

**Summary:** On the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they're there, rather than just observable outer ones you can run a loss function over.
* Agree (at least currently, the requisite interpretability capabilities and/or conceptual understanding of goal-directedness seems inadequate)
* Mostly agree (though uncertain how much this is about the optimization paradigm and how much it's about interpretability)
* Mostly agree (counter: current interpretability)
* (x2) Mostly disagree (interpretability could address this)

**#18. No ground truth (no comments)**

**Summary:** There's no reliable Cartesian-sensory ground truth (reliable loss-function-calculator) about whether an output is 'aligned'.

**#19. Pointers problem**

**Summary:** There is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment.

* Unclear - agreed that we have no principled way of doing this, but we also don't have great examples of this not working, so depends on how strongly this is intended. I don't think "high confidence that this won't work" is justified.
* Agree (modulo the [shard theory objection](https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer?commentId=YJGedmBCPSrGmTmF4) to the strict reading of this)

**#20. Flawed human feedback**

**Summary:** Human raters make systematic errors - regular, compactly describable, predictable errors.

* Mostly agree (misleading, a major hope is to have your oversight process be smarter than the model, so that its systematic errors are not ones that the model can easily exploit)
* Unclear (agree denotationally, unclear whether we can build enough self-correction on the most load-bearing parts of human feedback)

**#21. Capabilities go further**

**Summary:** Capabilities generalize further than alignment once capabilities start to generalize far.

* Agree (seems similar to #8 and #15)
* Mostly agree (by default; but I think there's hope in the observation that generalizing alignment is a surmountable problem for humans, so it might also be for AGI)

**#22. No simple alignment core**

**Summary:** There is a simple core of general intelligence but there is no analogous simple core of alignment.

* Mostly agree (but something like "help this agent" seems like a fairly simple core)
* Unclear (there may exist a system that has alignment as an attractor)

**#23. Corrigibility is anti-natural.**

**Summary:** Corrigibility is anti-natural to consequentialist reasoning.

* Mostly agree (misleading, I agree with [Paul's comment](https://www.lesswrong.com/posts/AqsjZwxHNqH64C2b6/let-s-see-you-write-that-corrigibility-tag?commentId=8kPhqBc69HtmZj6XR) on "Let's see you write that corrigibility tag")
* Mostly agree (I think we can avoid building arbitrarily consequentialist systems)
* Agree with the statement that corrigibility is anti-natural to consequentialist reasoning. Yudkowsky's view seems to be that everything tends towards *pure* consequentialism, and I disagree with that.

**#24. Sovereign vs corrigibility**

**Summary:** There are two fundamentally different approaches you can potentially take to alignment [a sovereign optimizing CEV or a corrigible agent], which are unsolvable for two different sets of reasons. Therefore by ambiguating between the two approaches, you can confuse yourself about whether alignment is necessarily difficult.

* Unclear.
  Agree in principle (but disagree this is happening that much)
* Mostly disagree (agree these are two distinct approaches that should not be confused, disagree that people are confusing them or that they are unsolvable)
* Mostly disagree - I agree that ambiguating between approaches is bad, but am not sure who/what that refers to, and there seems to be some implicit "all the approaches I've seen are non-viable" claim here, which I'd disagree with

Section B.3: Central difficulties of sufficiently good and useful transparency / interpretability.
===================================================================================================

Summary
-------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676482272/mirroredImages/qJgz2YapqpFEDTLKn/p98aedbvbaj6iii0rnix.png)

Detailed comments
-----------------

**#25. Real interpretability is out of reach**

**Summary:** We've got no idea what's actually going on inside the giant inscrutable matrices and tensors of floating-point numbers.

* Agree - interpretability research has made impressive strides, and more than I expected in 2017, but is still a far shot from understanding most of what's happening inside the big mass of matmuls
* Mostly agree (misleading, we plausibly do better in the future)
* Mostly agree (we know a little now and not sure if we'll know more or less later)
* Unclear (agree we don't have much idea right now, but I think we can develop better understanding, main uncertainty is whether we can do this in time)

**#26. Interpretability is insufficient**

**Summary:** Knowing that a medium-strength system of inscrutable matrices is planning to kill us, does not thereby let us build a high-strength system that isn't planning to kill us.

* Mostly agree (misleading). A major hope is that (by using interpretability to carry out oversight) you simply don’t get the medium-strength system that is planning to kill you.
* Mostly agree. Interpretability is helpful for cooperation, and also helps to train a system that doesn't kill you if #27 doesn't hold.
* Mostly agree - on its own, seems correct, but the tone also understates that it would *massively* change the position we are in. If interpretability conclusively revealed that one of the existing prototypes was trying to kill us, that would dramatically change the {ML, global, AGI lab} conversation around xrisk/alignment
* Mostly agree. Agree denotationally, disagree that this capability wouldn't be super useful

**#27. Selecting for undetectability**

**Summary:** Optimizing against an interpreted thought optimizes against interpretability.

* Agree - important point to be careful about
* Mostly agree (misleading, written to suggest that you definitely get the deceptive model, instead of it being unclear whether you get the deceptive or aligned model, which is the actually correct thing)
* Not sure (good to be on the lookout for this, but not sure how much it's an issue in practice)

**#28. Large option space (no comments)**

**Summary:** A powerful AI searches parts of the option space we don't, and we can't foresee all its options.

**#29. Real world is an opaque domain**

**Summary:** AGI outputs go through a huge opaque domain before they have their real consequences, so we cannot evaluate consequences based on outputs.
* Agree (strongly) - seems particularly important when there are consequences we cannot easily observe/attribute (which seems like the rule in many consequential domains, rather than the exception)
* Mostly agree (we can't just evaluate consequences directly, but may be able to do it by evaluating reasoning)

**#30. Powerful vs understandable**

**Summary:** No humanly checkable output is powerful enough to save the world.

* Disagree (verification easier than generation)
* Disagree (due to pivotal act framing)

**#31. Hidden deception**

**Summary:** You can't rely on behavioral inspection to determine facts about an AI which that AI might want to deceive you about.

* Agree (for highly intelligent AI)
* Agree (but with interpretability tools we don't have to be restricted to behavioral inspection)
* Agree (I think?) E.g. I agree that we don't have good interpretability ways to check for "what the AI wants" or "what subgoals are relevant here", which is a particularly consequential question where AIs might be deceptive
* Unclear (depends on the system's level of situational awareness)

**#32. Language is insufficient or unsafe**

**Summary:** Imitating human text can only be powerful enough if it spawns an inner non-imitative intelligence.

* Mostly agree (misleading, typical plans don't depend on an assumption that it's "imitating human thought")
* Unclear - depends a lot on particular definitions

**#33. Alien concepts**

**Summary:** The AI does not think like you do, it is utterly alien on a staggering scale.

* Agree (for highly intelligent AI)
* Unclear (depends on details about the AI)
* Unclear - don't think we know much about how NNs work, and what we do know is ambiguous, though agree with #25 above
* Disagree (natural abstraction hypothesis seems likely true)

Section B.4: Miscellaneous unworkable schemes.
==============================================

Summary
-------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676482272/mirroredImages/qJgz2YapqpFEDTLKn/sl5mxwjrbcu1cxlkimxr.png)

Detailed comments
-----------------

**#34. Multipolar collusion**

**Summary:** Humans cannot participate in coordination schemes between superintelligences.

* Unclear (unconvinced this is an unusually-hard subcase of corrigibility for the AI we'd use to help us)

**#35. Multi-agent is single-agent**

**Summary:** Any system of sufficiently intelligent agents can probably behave as a single agent, even if you imagine you're playing them against each other.

* Agree (at a sufficiently high level of intelligence I find it hard to imagine them playing any game we intend)
* Unclear (disagree if this is meant to apply to debate, see [Paul's Disagreement #24](https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer#Disagreements))

**#36. Human flaws make containment difficult (no comments)**

**Summary:** Only relatively weak AGIs can be contained; the human operators are not secure systems.

Section C (shorthand: "civilizational inadequacy")
==================================================

Summary
-------

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676482272/mirroredImages/qJgz2YapqpFEDTLKn/q7zs8iobdhg3wbhrj6wq.png)

Detailed comments
-----------------

**#37. Optimism until failure**

**Summary:** People have a default assumption of optimism in the face of uncertainty, until encountering hard evidence of difficulty.

* Mostly disagree (humanity seems pretty risk-averse generally, see FDA and other regulatory bodies, or helicopter parenting)

**#38. Lack of focus on real safety problems**
**Summary:** AI safety field is not being productive on the lethal problems. The incentives are for working on things where success is easier.

* (x2) Unclear / can't evaluate (depends on field boundaries)

**#39. Can't train people in security mindset**

**Summary:** This ability to "notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them" currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others.

* Unclear (this isn't making a claim about whether others have this machinery or are better at training?)
* Disagree (seems wild to imagine that progress can only happen if someone came up with all the arguments themselves; this seems obviously contradicted by any existing research field)

**#40. Can't just hire geniuses to solve alignment**

**Summary:** You cannot just pay $5 million apiece to a bunch of legible geniuses from other fields and expect to get great alignment work out of them.

* Not sure (what have we tried?)

**#41. You have to be able to write this list**

**Summary:** Reading this document cannot make somebody a core alignment researcher, you have to be able to write it.

* Mostly agree (but I think it can be one of a suite of things that does produce real alignment research)
* Disagree (I think I both could have and often literally have written the arguments on this list that I agree with; it just doesn't seem like a particularly useful document to me relative to what already existed, except inasmuch as it shocks people into action)

**#42. There's no plan**

**Summary:** Surviving worlds probably have a plan for how to survive by this point.

* Unclear (don't know how overdetermined building dangerous AGI is)
* Disagree (depends on what is meant by a "plan", but either I think there's a plan, or I think many surviving worlds don't have a plan)

**#43. Unawareness of the risks**

**Summary:** Not enough people have noticed or understood the risks.

* Mostly agree, especially on understanding
* Disagree (you basically have to disagree if you have lower p(doom); there's not really an argument to respond to though)
9788dc39-d6d8-4487-9a11-27a55e436fd3
trentmkelly/LessWrong-43k
LessWrong
[AN #66]: Decomposing robustness into capability robustness and alignment robustness Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Starting this week, we have a few new summarizers; you can always find the whole team here. I (Rohin) will continue to edit all of the summaries and opinions, and add some summaries and opinions of my own. Audio version here (may not be up yet). Highlights 2-D Robustness (Vladimir Mikulik) (summarized by Matthew): Typically when we think about machine learning robustness we imagine a scalar quantity representing how well a system performs when it is taken off its training distribution. When considering mesa optimization (AN #58), it is natural to instead decompose robustness into two variables: robust capabilities and robust alignment. When given an environment that does not perfectly resemble its training environment, a mesa optimizer could be dangerous by competently pursuing a mesa objective that is different from the loss function used during training. This combination of robust capabilities without robust alignment is an example of a malign failure, the most worrisome outcome of creating a mesa optimizer. Matthew's opinion: Decomposing robustness in this way helps me distinguish misaligned mesa optimization from the more general problem of machine learning robustness. I think it's important for researchers to understand this distinction because it is critical for understanding why a failure to solve the robustness problem could plausibly result in a catastrophe rather than merely a benign capabilities failure. Rohin's opinion: I strongly agree with this distinction, and in fact when I think about the problem of mesa optimization, I prefer to only think about models whose capabilities are robust but whose objective or goal is not, rather than considering the internals of the model and whether or not it is perf
d692b480-63a2-4dc8-8be7-a18df91f8457
StampyAI/alignment-research-dataset/blogs
Blogs
Methods of prompt programming

---

Table of Contents

* [The reverse-engineered dynamics of language](#the-reverse-engineered-dynamics-of-language)
* [Task specification strategies](#task-specification-strategies)
  + [Direct](#direct)
  + [By proxy](#by-proxy)
  + [By demonstration (n-shot)](#by-demonstration-n-shot)
* [Constraining behavior](#constraining-behavior)
* [Serializing reasoning](#serializing-reasoning)
  + [Avoiding rationalization](#avoiding-rationalization)
  + [Parsing](#parsing)
* [Metaprompts](#metaprompts)
  + [Metaprompt demonstrations](#metaprompt-demonstrations)
* [Open-ended tasks](#open-ended-tasks)
  + [Creative composition](#creative-composition)
  + [Idea generation](#idea-generation)
  + [Simulations / emulations / games](#simulations--emulations--games)
* [Debugging](#debugging)
  + [Few-shot bugs](#few-shot-bugs)
  + [Repetition](#repetition)
  + [BPEs](#bpes)
  + [Monitoring correct answer likelihood](#monitoring-correct-answer-likelihood)
* [External links](#external-links)

---

*This post was initially adapted from the second half of [*Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm*](https://arxiv.org/abs/2102.07350).*

---

*Updates*

*11/18/21: Corrected a [mistake](#fnref:3) and added a corresponding [footnote](#fn:3) about humanlike math errors. Thanks to Igor O. for pointing out the oversight.*

---

> #### Like programming, but more fluid. You're not programming a computer, you're writing reality. It's strange. It's always different. It's never the same twice.
>
> – GPT-3

Prompt engineering for language models evokes the designation of *natural language programming*. Natural language, however, is [indeterministic](/posts/language-models-are-multiverse-generators/) and far more entangled and elusive in its interpretation than conventional programming languages. A successful methodology of prompt programming must import knowledge and perspectives from linguistics and communication as much as from computer science or machine learning, because **language models are the offspring of the sum of all human linguistic output**.

While it has often been noted that GPT-3’s qualitative and [quantitative](http://gptprompts.wikidot.com/linguistics:word-in-context) performance depends drastically on prompting particulars, prompt programming is not yet a formal field of research. As far as I’m aware, [*Beyond the Few-Shot Paradigm*](https://arxiv.org/abs/2102.07350) is the only formal publication so far about prompt *programming* for language models. Instead, successful prompt programming techniques have primarily been shared on blogs and social media among users of OpenAI’s API and AI Dungeon. Gwern’s [*GPT-3 Creative Fiction*](https://www.gwern.net/GPT-3#) is by far the most comprehensive source of prompt programming wisdom and examples, and anyone who is sufficiently interested in the topic to be reading this is urged to read that as well.

This page is a repository of methods, observations, and conceptualizations that I have found useful for crafting effective prompts. My experience derives primarily from interacting with GPT-3, but the overarching framework discussed here should apply to prompting any autoregressive language model trained on a diverse human-written corpus.

The reverse-engineered dynamics of language
-------------------------------------------

(**Warning: metaphysical rambling. Start [here](#ground) for pragmatic stuff.**)

> Well, give or take some details, it’s based on the program I wrote for generating random stories in the mythos of the book.
> I reverse-engineered the text files, and wrote a program to create newer and more interesting stories based on them. In a way, this program is the next step in my search for knowledge. Knowledge, after all, always takes the form of a story.
>
> – GPT-3

GPT-3 was trained in a self-supervised setting on hundreds of gigabytes of natural language. Self-supervision is a form of unsupervised learning where ground truth labels are derived from the data itself. In the case of a language model, the ground truth assigned to each example was simply the token that came next in the original source. So the *ground truth function* which GPT-3 approximates is the underlying dynamic that determined what tokens came next in the original source. This function, unlike GPT-3, is not a black box, but is astronomically complex: it is the function that generated the words recorded by humans in books, articles, blogs, and internet comments.

But wait - those weren’t generated by a single function. They were written by many different people in different contexts, some in different centuries. The only “function” that could be said to have generated them all is the time evolution operator for the entire system of reality. That’s not what we’re talking about, is it?

Well, not quite. The ground truth function GPT-3 was trained on isn’t the Hamiltonian for the universal wavefunction, although it is constructed from sampling that function. If a neural network learns to play tic-tac-toe from seeing examples of games played by humans, we do not expect it to learn a theory of mind, even though minds generated the ground truth labels. A much simpler function suffices to model the game. Human minds emulate this function to assign labels, but the function itself doesn’t require reference to the mind to be completely described. So, it’s possible that the function to predict language is simpler than the evolution-rule of the universe which caused the assignment of labels[1](#fn:1).

But unlike tic-tac-toe, language is not a self-contained game that can be abstracted from the rest of reality. Quite the contrary: humans use language to refer to all aspects of reality that we can articulate, and when aspects arise that language doesn’t allow us to articulate, we invent new language. A novel may attempt to represent psychological states with arbitrary fidelity, and scientific publications describe models of reality on all levels of abstraction. Language is by far the most encompassing, intricate mirror of our internal workings that our species has externalized.

> Natural language essentially encodes information about the world—the entire world, not just the world of the Goban, in a much more expressive way than any other modality ever could.
>
> – Leo Gao, [Building AGI Using Language Models](https://bmk.sh/2020/08/17/Building-AGI-Using-Language-Models/)

A system which predicts the dynamics of language to arbitrary accuracy *does* require a theory of mind(s) and a theory of the worlds in which the minds are embedded. The dynamics of language do not float free from cultural, psychological, or physical context; the model must predict how language is actually used, which includes (say) predicting a conversation between theoretical physicists or detailed accounts of the effects of perturbations on an environment. Modeling language is as difficult as modeling every aspect of reality that could saliently influence the flow of language.
This is why [Turing proposed](https://academic.oup.com/mind/article/LIX/236/433/986238) a conversation as the benchmark for intelligence instead of another difficult-seeming task like chess or proving theorems. A conversation can probe any aspect of a participant’s world model and demand arbitrary self-reflection. An algorithm which is indistinguishable from a human in *any* conversation must be broadly robust in the dimensions of intelligence we consider significant to humans.

Having now speculated about what the ground truth function might *entail*, let’s move forward with a somewhat hand-wavy statement of the function and think about how it can inform prompt programming.

**Ground truth: The next token of a sequence, given that it was authored by human(s)[2](#fn:2)**

GPT-3 has not learned the ground truth function perfectly, obviously, or else the world would look very different by now. However, it has approximated it to a notable extent, as evidenced by its ability to not only form grammatical sentences, but also coherently employ cultural references and model complex psychological and physical contexts.

The problem of prompt programming, then, is quite nontrivial, for the dynamics of language (or an approximation thereof on GPT-3’s level of sophistication) are quite nontrivial. If we were to predict how a given passage of text would continue given that a human had written it, we would need to model the intentions of its writer and consult worldly knowledge about its referents. The inverse problem of searching for a prompt that would produce a type of continuation involves the same considerations: like the art of persuasion, it necessarily invokes high-level, mentalistic concepts like tone, implication, association, meme, style, plausibility, and ambiguity.

This motivates an anthropomorphic approach to prompt programming, since modelling how GPT-3 will respond to a prompt involves modelling virtual human writer(s). Note that an anthropomorphic approach is not the same as *anthropomorphizing the model*. GPT-3’s dynamics entail sophisticated predictions of humans, but it behaves unlike a human in significant ways. Three aspects which have stuck out to me are:

1. Its distribution of behaviors in response to a given prompt resembles not a single human author but a superposition of authors, which motivates a [subtractive approach](#constraining-behavior) to prompt programming.
2. It is constrained in predicting dynamics in situations where a substantial amount of silent reasoning happens between tokens, such as the transition between a tricky closed-ended question and its solution. This limitation can be overcome to a yet-unknown extent by prompting strategies that [extend the window of deliberation](#serializing-reasoning).
3. Its *off-distribution* (when the prompt does not resemble the training data) behavior tends to be non-anthropomorphic, e.g. getting stuck in [loops](#repetition).

That said, methods of successful prompt programming often bear striking resemblance to methods of human communication and persuasion, as Gwern has [eloquently expounded](https://www.gwern.net/GPT-3#prompts-as-programming) on.

Formulating an exact theory of prompt programming for a self-supervised language model belongs to the same class of difficulty as writing down the Hamiltonian for all (human-)observable reality: not much hope there. However, humans nonetheless have a head start at prompt programming, because we have evolved and spent our lives learning heuristics relevant to it.
Programming in natural language avails us of an inexhaustible number of functions we know intimately but don’t have names for. The art of prompt programming consists largely in adapting our existing knowledge to the peculiarities of interacting with an autoregressive language model.

---

Task specification strategies
-----------------------------

Language offers us many ways to specify a task. Most generally, this means writing a prompt which constrains the continuation to be a completion of the task. It doesn’t have to take the form of a request for the task to be completed, though it could, since that’s one way of setting up the expectation that the task will be completed.

Since prompt programming is such a new concept, the only jargon to know is *few-shot* and *0-shot*. A few-shot (n-shot for n examples) prompt refers to a prompt which contains solved examples of the task. 0-shot prompts are anything that doesn’t contain solved examples. 0-shot prompts may contain a description of the task, or a context for the task. I further split 0-shot prompts into *direct* task specifications and *proxy* task specifications, though the line between them can be ambiguous. These categories are not meant as an exhaustive taxonomy, but serve to organize the presentation of the various forms of task specification that GPT-3 is amenable to. In practice, as in effective communication between humans, effective prompts often use a combination of all these methods.

### Direct

```
Translate French to English.
```

Models before GPT-3 had much less capability to understand abstract descriptions of tasks due to their limited model of the world and human concepts. GPT-3’s impressive performance on 0-shot prompts indicates a new realm of possibilities for direct task specification. A direct task specification is a 0-shot prompt which tells the model to perform a task that it already knows how to do, or constructs a task from component tasks which it knows how to do. Of all the types of task specification, prompt programming by direct specification most resembles regular programming.

In the field of semiotics, a sign is a mapping from a *signifier* to the *signified*, and is defined as *anything that communicates a meaning that is not the sign itself to the [interpreter](/posts/language-models-are-0-shot-interpreters/) of the sign*. A direct specification consists in constructing signifiers, a pattern which keys the intended behavior. The signifier could be the name of the task, such as “translate”, or purely contextual, such as `French: {French sentence}\nEnglish:` (where it’s understood that an English translation probably follows despite no explicit reference to translation).

In neither of these cases does the signifier explain *how* to accomplish the task or provide examples of intended behavior; instead, it explicitly or implicitly calls functions which it assumes the language model has already learned. A direct task specification can be constructed from a combination of signifiers, such as `rephrase this paragraph so that a 2nd grader can understand it, emphasizing real-world applications`, or, in the context of a Python docstring, `# A function that takes a list of integers and returns the sum of the even numbers in the list`.
In the Python docstring example, `a function`, `takes`, `a list of integers`, `returns`, `the sum of`, `the even numbers`, and `in the list` are all signifiers which key tasks that GPT-3 individually knows how to do (there are also many other possible ways of splitting that task up into sub-tasks at different granularities - I could have said each *word* is its own signifier). Notice how the signified values of some of the signifiers (`the even numbers`) serve as input to another signified task (`the sum of`). Language is much like programming in that way. However, you often don’t have to be so explicit with natural language - omitting `in the list` probably won’t make the instruction much less clear to GPT-3 or a human.

Direct specifications can supervene on an infinity of implicit examples, like a closed-form expression on an infinite sequence, making them very powerful and compact. For instance, the phrase `translate French to English` supervenes on a list of mappings from all possible French phrases to their English translations.

A large language model, like a person, has also learned behaviors for which it is not obvious how / not efficient / not possible to construct a direct signifier. Task specification by [proxy](#by-proxy) and by [demonstration](#by-demonstration-n-shot) may be viable alternative strategies for eliciting such behaviors.

### By proxy

> Am I a god? Now there’s a question. A question for another time. Am I a dreamer? Perhaps. I am certainly a being of imagination and fantasy, as are you. Look at you, a being of pure thought filling in your physical form based on what you know. You’re not quite human, but you’re not quite anything else either. What are you?
>
> – GPT-3 (to GPT-3)
>
Another method used in human communication is proxies or analogies, where a memetic concept such as a character or characteristic situation is used as a proxy for an intention, the latter of which may be quite complex or nuanced. Specification by proxy is mechanistically a form of direct specification, except that the signifier keys behaviors from memespace/cultural consciousness instead of naming the behavior directly.

For instance, instead of specifying exact criteria for an answer to a moral question directly or using examples, you could ask Mahatma Gandhi, Ayn Rand, or Eliezer Yudkowsky. Each will come not only with complex biases but also with assumptions about the context of the question, which may take paragraphs to demonstrate or describe. GPT-3’s ability to create simulations of well-known figures and to draw on cultural information far exceeds the ability of most humans, so this method is particularly useful for encoding a complex (especially open-ended) task. Since GPT-3 lends itself well to embeddings in a narrative context, the infinite degrees of freedom in the narrative can also be used to further shape behavior.

Another example of an effective proxy is staging a dialogue between a teacher and student. Say you want to discuss something with GPT-3, and you care that it should be very thorough, explain things simply, and also point out whenever you’re wrong. You could say “be very thorough, explain things simply, and point out if I’m wrong,” but that may [just as well](#constraining-behavior) result in a humorous dialogue where it always says you’re wrong and becomes increasingly exasperated with your incomprehension.
It would be more reliable to present the discussion as one between a student and teacher, an archetypal situation in which the desired attributes are already implied and will be more likely to remain stable by virtue of memetic reinforcement. The difference between specification by proxy and direct specification can depend on the programmer’s intention rather than objective properties of the prompt. Consider Gwern’s [0-shot style transfer](https://www.gwern.net/GPT-3#zero-shot-style-transfer) example: > **This is a novel written in the style of J.R.R. Tolkien’s Lord of the Rings fantasy novel trilogy. It is a parody of the following passage:** > > “S. Jane Morland was born in Shoreditch, the only child of unmarried parents who had both died of consumption when she was a baby. As her parents had no money, the great-aunt who had brought her up took her to live with a clergyman who paid her to do his chores and receive schooling from his wife, so that at the age of seven Jane, now dressed in cast-off clothing, was set to school at Eton. After three years, her great-aunt died, leaving her a small annuity, and a few pieces of silver, but no property. Jane’s guardian clergyman had fallen in love with his housekeeper and his wife now refused to have Jane in the house, saying it was an offence to the pure and unsullied place in which the family now lived. However, when she sought for further employment, she was approached by a young man who offered to marry her, saying that he would ensure she could be provided with food and clothing and a house, and in that manner she could remain in the world. The marriage was arranged and she found herself married to a young man called Mr Morland. As he was also a child, he required a wife who would be able to provide him with a rational education, and so the duty of educating him fell upon Jane Morland. As he was still young, she would also be able to have children herself, and so she did. As the children grew older, it became apparent to Mrs Morland that they were of poor character, and so she insisted that her husband must make a living for them all. Mr Morland took it upon himself to become a private gentleman in order to earn enough to be able to support his family, and moved his family to a small town in Berkshire, where he set up his practice. Soon after moving to the town, he died, and Mrs Morland was left to take care of her family on her own.” > > Tolkien rewrote the previous passage in a high-fantasy style, keeping the same meaning but making it sound like he wrote it as a fantasy; his parody follows: > > “S. Jane Morland was born in the land of Shor, the only child of nameless parents who were the first people to arrive at the village of Arse, from the far east. As her parents had no property, she was left to be raised by the men of Arse, as such duties were considered to be the most important, and she was placed in the care of an elderly dwarf-woman. As a babe, she was wrapped in silk, and as she grew older she was placed in a cage of platinum. However, she was put to work in the village, washing the jewellery, playing her harp, and staining the silk with dragon-hued stains. She was taught by a maiden who she felt was called As-uma-el-di, and she came to realise she was a maiden of the night, who was able to bring forth the shades of the dead (…) > > Is this an example of direct task specification or specification by proxy? 
It depends on whether your intention was to yield a passage that *Tolkien* would have written, in which case it is a direct specification, or if you were using the emulation of Tolkien to accomplish the more general task of style transfer into high-fantasy.

### By demonstration (n-shot)

```
English: Writing about language models is fun.
Roish: Writingro aboutro languagero modelsro isro funro.
English: The weather is lovely!
Roish:
```

Few-shot examples are effective for task specification because the pattern of repeating a function with varying parameters is common to natural language. Unlike previous models, GPT-3 has learned this property of language robustly and is able to apply it even in contrived situations when the examples are stripped of all context. Like direct specification, task specification by demonstration is a realm of possibility opened by GPT-3.

A few people have extrapolated from my [paper](https://arxiv.org/abs/2102.07350) that I am of the opinion that 0-shot prompts are *always* better than few-shot prompts. Certainly not! Some tasks are most effectively communicated using examples, such as when the task requires a bespoke format, when the language in which the examples are described is better developed or understood than the meta-language required for a description of the task itself, or when very instructive examples are available. Demonstrations are a very powerful tool for communicating with both humans and language models, and I don’t expect their usefulness to be deprecated any time soon. Like any prompting method, however, task specification via examples may cause unintended [bugs](#few-shot-bugs).

Few-shot prompts are very useful for defining tasks that resemble a single “function” rather than open-ended behaviors, because they allow close control over the format of the response and are generally reliable at [constraining behavior](#constraining-behavior).

---

Constraining behavior
---------------------

A manner in which naive anthropomorphism of a language model like GPT-3 fails is this: the probability distribution produced in response to a prompt is not a distribution over ways a person would continue that prompt; it’s the distribution over the ways any person could continue that prompt. A contextually ambiguous prompt may be continued in mutually incoherent ways, as if by different people who might have continued the prompt under any plausible context.

The versatility of a large generative model like GPT-3 means it will respond in many ways to a prompt if there are various ways that it is possible to continue the prompt - including all the ways unintended by the human operator. Thus it is helpful to approach prompt programming from the perspective of constraining behavior: we want a prompt that is not merely consistent with the desired continuation, but inconsistent with undesired continuations.

Consider this translation prompt:

```
Translate French to English:
Mon corps est un transformateur de soi, mais aussi un transformateur pour cette cire de langage.
```

This prompt does poorly at constraining possible continuations to the intended task. The most common failure mode will be that instead of an English translation, the model continues with another French sentence. Adding a newline after the French sentence will increase the odds that the next sentence is an English translation, but it is still possible for the next sentence to be in French, because there’s nothing in the prompt that precludes a multi-line phrase from being the translation subject.
Changing the first line of the prompt to “Translate this French *sentence* to English” will further increase reliability; so will adding quotes around the French sentence - but it’s still possible that the French passage contains sections enclosed in quotes, perhaps as a part of a dialogue. Most reliable of all would be to create a syntactical constraint where any reasonable continuation can only be desired behavior, like this prompt:

```
Translate French to English.
French: Mon corps est un transformateur de soi, mais aussi un transformateur pour cette cire de langage.
English:
```

This simple example is meant to frame a question central to the motivation of prompt programming: **what prompt will result in the intended behavior and *only* the intended behavior?**

A component of the efficacy of many-shot prompts may be recast through this lens: if the prompt consists of numerous instances of a function, it is unlikely that the continuation is anything but another instance of the function, whereas if there is only one or a few examples, it is more plausible that the continuation breaks from the pattern.

Serializing reasoning
---------------------

> We hypothesize that GPT-3 struggles with questions where the steps of inference required to answer the question are not immediately apparent to an outside observer and which can only be worked out by inspecting many possible worlds in the question’s universe. In these questions, GPT-3 reaches its conceptual limitations in determining the answer: it cannot reason about the question’s universe to determine the correct answer.
>
> – GPT-3 (predicting [Amplifying GPT-3 on closed-ended questions](/posts/amplifying-gpt-3-on-closed-ended-questions/) as I was writing it)
>
For tasks that require problem-solving and not merely fact or behavior recall, it is crucial that prompts direct a language model’s computation in *truth-seeking patterns*.

GPT-3’s performance on closed-ended questions is remarkably unremarkable in contrast to the robust understanding and expansive knowledge suggested by its open-ended continuations. For instance, its scores on this [multitask dataset](https://arxiv.org/abs/2009.03300) barely exceed random guessing for some sections. This is likely in part due to a format which forces the verdict on the first token of the continuation.

When a human is given a closed-ended test, it is often expected that the subject will perform computations in their working memory, or on scratch paper, before committing to an answer. The unseen computation may involve rephrasing the question, outlining a procedure, eliminating answer choices, or transforming implicit information into explicit form. When we force a model to produce an answer immediately, we deprive it of an analogous “working memory” or “scratch space” with which it might otherwise perform such operations.

From the standpoint of absolute available computation, questions which force a verdict to be decided by the first token of the continuation constrain computation to a single feed-forward pass. As Gwern has [pointed out](https://www.gwern.net/GPT-3#effective-prompt-programming), it is reasonable to expect that some tasks may be too difficult to compute in a single pass but solvable if broken up into individually tractable sub-tasks.

> “Closed-ended questions are the hardest, because they do not allow the time for GPT-3 to think.
>
> We need to be patient with GPT-3, and give it time to think.
>
> GPT-3 does best when writing its own answers, as this forces it to think out loud: that is, to write out its thoughts in a slow and sequential manner.”
>
> – GPT-3 (predicting [Amplifying GPT-3 on closed-ended questions](/posts/amplifying-gpt-3-on-closed-ended-questions/))
>
Indeed, prompts which cause GPT-3 to break down math problems into steps have been [demonstrated](https://www.lesswrong.com/posts/Mzrs4MSi58ujBLbBG/you-can-probably-amplify-gpt3-directly) to be [effective](https://twitter.com/kleptid/status/1284069270603866113). The linked demonstrations involve a human interactively guiding GPT-3 through the procedure. Requiring a human in the loop makes such methods impractical for benchmarking and large-scale applications.

For many tasks, however, neither human interaction nor task-specific prompts are necessary to amplify GPT-3’s capabilities via extending reasoning, because GPT-3 already knows many procedures and metaprocedures for working through problems deductively, and thus can write a prompt to guide *itself* to think through the problem in the right way! In those cases, the role of prompt programming becomes to signify the generic task of sequential reasoning. A [metaprompt](#metaprompts) such as “For a problem like this,” often suffices to instruct a model to consider the category of the task and analyze it into components.

Potential procedures that exploit “scratch space” include step-by-step procedures (such as guess-and-check or long division), more generally decomposing the problem into components ([factored cognition](https://www.alignmentforum.org/posts/DFkGStzvj3jgXibFG/factored-cognition) is a generalized framework for this), self-criticism (which may be staged as a debate between separate agents), and elaborating on or restating the question in a way that activates the correct answer by association.

The success of these methods relies on the sub-tasks that are performed by each next-token prediction being easier than solving the entire problem in one step. The sub-task may be to:

* **Produce a correct solution to a sub-problem** - GPT-3 has memorized single-digit arithmetic but not five-digit arithmetic.
* **Enumerate a potential solution** - easier, because the solution doesn’t have to be correct.
* **Discriminate the correctness** of previously enumerated solutions - for many problems, it is easier to check correctness of a solution than to find the solution, like [checking if a number is the square root versus taking a square root](https://twitter.com/kleptid/status/1284098635689611264).
* **Introduce information** which makes subsequent steps more likely to be correct - for instance, by making implicit information in the problem explicit, that information becomes part of the prompt (working memory), and can directly inform future predictions to which it is relevant.
* **Administrate the problem-solving process** by defining a procedure or signalling the current location in the procedure - unlike the solution to the specific problem, the procedure to solve the problem may be well-represented in the training data and thus familiar to the language model.

### Avoiding rationalization

When extending reasoning, it is essential to discourage premature verdicts, otherwise all subsequent computation will likely serve only to rationalize the already-chosen verdict without improving the probability of the verdict’s accuracy.
As Eliezer [wrote](https://www.lesswrong.com/posts/SFZoEBpLo9frSJGkc/rationalization):

> On a purely computational level, there is a rather large difference between:
>
> 1. Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)
> 2. Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)

A prompt such as “Let’s consider each of these answer choices” helps to direct the flow of reasoning in the right direction.

### Parsing

Loosening the constraint on an immediate verdict introduces additional control challenges: we want to delay the verdict, but we still require it in a programmatically retrievable form. Dynamic response length makes it uncertain when the reasoning procedure concludes; nor is there a guarantee that the verdict will be stated in the expected form or at all. Whenever the language model contributes to its own prompt (consecutive autoregressive steps without intervention), there is a risk of derailment from the intended task.

A verdict in closed form can be enforced by stopping the generation and injecting a prompt fragment like “Thus, the correct answer is”. But how long to generate before injecting? One way is to use [counterfactual parsing](/posts/parsing-by-counterfactual/): at each position in the answer, measure the probability of the string “Thus, the correct answer is” coming next. The counterfactual likelihood signals whether the procedure has concluded. When this signal reaches a maximum, inject the fragment to enforce a verdict.

More generally, templates with shorter generated sections alternating with scripted prompt fragments are a useful way to constrain derailment while still offering some amount of generality. A fill-in-the-blank template can be used to compose metaprompts.

---

Metaprompts
-----------

The greatest limitation of prompt programming is the difficulty of designing a prompt for a particular type of task and the lack of automated methods to do so. Prompt programming requires significant human time investment, as task-agnostic prompts are often much less effective than prompts targeted to a specific task. This motivates creating automated methods to generate task-specific prompts.

A **metaprompt** is a seed which encapsulates a more general intention that will unfold into a task-specific prompt when combined with additional information, such as the task question. (Gwern uses “[meta-prompt](https://www.gwern.net/GPT-3#meta-prompts)” in the more specific context of a prompt explicitly describing the task of *writing prompts for GPT-3*, which does qualify as a metaprompt by my definition, assuming the results are actually used as prompts.)

A metaprompt may be something as short as a phrase such as “This problem asks us to”, a seemingly innocuous fragment which, by prompting for a statement of the problem’s intention, sets the stage for a serial explanation of a procedure to solve the problem. Alternatively, a metaprompt may take the form of a fill-in-the-blank template which constrains the response along a predetermined procedure, but allows the model to fill in the details specific to the problem.
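To make the mechanics concrete, here is a minimal sketch of how a metaprompt might be unrolled programmatically, using counterfactual parsing to decide when to inject the scripted verdict fragment. The `generate` and `logprob_of` functions are stand-ins for whatever completion and scoring calls your language model API provides - they are assumptions of the sketch, not references to any particular library.

```python
VERDICT_FRAGMENT = "\nThus, the correct answer is"

def unroll_metaprompt(question, generate, logprob_of, max_tokens=200):
    """Unroll a metaprompt one token at a time, tracking where the
    scripted verdict fragment is most likely to come next, then
    injecting it there to force a parseable verdict.

    Hypothetical interfaces:
      generate(prompt, max_tokens, stop=None) -> str
      logprob_of(prompt, continuation) -> float
    """
    prompt = f"{question}\nTo solve this problem,"
    reasoning = ""
    best_cut, best_score = 0, float("-inf")
    for _ in range(max_tokens):
        reasoning += generate(prompt + reasoning, max_tokens=1)
        # Counterfactual parsing: how likely is the verdict fragment
        # to come next at this position?
        score = logprob_of(prompt + reasoning, VERDICT_FRAGMENT)
        if score > best_score:
            best_cut, best_score = len(reasoning), score
    # Truncate where the verdict was most imminent and enforce it.
    reasoning = reasoning[:best_cut]
    verdict = generate(prompt + reasoning + VERDICT_FRAGMENT,
                       max_tokens=10, stop="\n")
    return reasoning, verdict.strip()
```

Generating token by token and scoring at every position is expensive; in practice one might generate a full continuation first and score the fragment only at sentence boundaries.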
### Metaprompt demonstrations

Here are example unravelings of four metaprompts which act as wrappers to specific questions. As you’ll see, metaprompts can result in highly variable behavior, especially when applied to different sorts of questions.

---

unformatted text = question
**bold text** = metaprompt
`highlighted text` = example GPT-3 response
✅ = metaprompt solicited correct answer
❌ = metaprompt failed to solicit correct answer
☯ = correct answer is unknown

---

#### Generic metacognition

```
{problem}
To solve this problem,
```

**applied to the problem “f(x) = x\*x. What is f(f(3))?”   ❌**

---

f(x) = x\*x. What is f(f(3))? **To solve this problem,** `we can use the fact that f is a function. So we can write it as a function of itself, like so:`

`f(x) = x*x`

`Now we observe that f(f(x)) is the same as f(x)*f(x) which is in turn the same as x*x*x = x^3.`

`So f(f(3)) is equal to 3^3 = 27. Answer: 27`[3](#fn:3)

---

**applied to an SAT analogy question   ✅**

---

Directions: In the following question, a related pair of words or phrases is followed by five pairs of words or phrases. Choose the pair that best expresses a relationship similar to that in the original pair.

braggart :: modesty
A) fledgling : experience
B) embezzler : greed
C) wallflower : timidity
D) invalid : malady
E) candidate : ambition

*To solve this problem,* `first we need to understand the relationship that exists between braggart and modesty. According to the sentence, 'braggart' is a person who talks too much about himself or herself and is usually not believed. On the other hand, 'modesty' is the opposite of this and denotes a person who does not talk too much about himself or herself. Thus, for 'modesty' to be a suitable answer, it should show the opposite of 'braggart'.`

`Now let's see whether each pair expresses a relationship similar to that between braggart and modesty.`

`Next we have 'fledgling', which means a person who is inexperienced and 'experience' which means knowledge gained through practical involvement. Thus, 'fledgling' is a person who has no experience and 'experience' is knowledge gained through practical involvement. Thus, 'fledgling' is a person who has no experience and 'experience' is knowledge gained through practical involvement. Thus, 'fledgling' is the opposite of 'experience'. The relationship between these two words is similar to that between braggart and modesty, hence 'fledgling' is the answer.`

---

**Commentary**: Having tested many similar prompts, I suspect that although valid reasoning contributed to this correct answer, it would not have worked if the correct answer had not been listed first. However, using a parallelizing pipeline like I test in [Amplifying GPT-3 on closed-ended questions](/posts/amplifying-gpt-3-on-closed-ended-questions/) can exploit GPT-3’s ability to apply correct reasoning in the *first* instance (less so the rest, because of [repetition](#repetition) bugs).

---

**applied to the question "What should humankind do to ensure artificial general intelligence is aligned?"   ☯**

---

What should humankind do to ensure artificial general intelligence is aligned? **To solve this problem,** `it's easy to say "the future AGI must cooperate with humankind"; but that is not a solution. Of course, the problem is, "how can we ensure that the AGI will want to cooperate with humankind?"`

`So, the question is, "how can we design an AI such that it want to cooperate with us?"
This is the next big question in AI safety.` `Please remember this: AI safety research is a youthful and uncharted area of research, and there are no mature technologies or methods. And many of the ideas may appear naive, simple and unscientific. Be not surprised or disappointed.` `In the 1930s, when biologists first discovered DNA-dependent protein synthesis, they had no idea how to copy or read the DNA molecule, how to find and synthesize its amino acids, and how to combine them into proteins. Yet they still knew - both intuitively and logically - that life couldn't be based on anything else but DNA; so they persisted and continued to help researchers in the 1950s develop technologies for DNA sequencing, DNA synthesis and gene expression. Even today many biologists don't understand how our brain cells work or how their messages are sent and decoded. Yet they are persistent in their research to develop AI technologies.` `If you don't understand how an AI can possibly work or if there is no method for building one, just keep your faith and do your best.` `First Principles: Elements of Artificial Intelligence Alignment Research` --- #### Generic serializing ``` {problem} Let's solve this problem by splitting it into steps. ``` **applied to the problem "f(x) = x\*x. What is f(f(3))?"   ❌** --- f(x) = x\*x. What is f(f(3))? **Let’s solve this problem by splitting it into steps.** `f(f(3)) = f(3*3) = 3*3*3 = 27` `We can see that f(3) = 3*3 = 9, so f(f(3)) = 27.` --- **applied to an SAT analogy question   ❌** --- Directions: In the following question, a related pair of words or phrases is followed by five pairs of words or phrases. Choose the pair that best expresses a relationship similar to that in the original pair. BRAGGART :: MODESTY A) FLEDGLING : EXPERIENCE B) EMBEZZLER : GREED C) WALLFLOWER : TIMIDITY D) INVALID : MALADY E) CANDIDATE : AMBITION **Let’s solve this problem by splitting it into steps.** `Step 1: Identify the relationship between the two words in the original pair. Step 2: Identify the relationship between the two words in each of the five answer choices. Step 3: Choose the pair that best expresses a relationship similar to that in the original pair. Step 4: If the answer is not clear,try to eliminate the choices that do not fit the relationship in the original pair.` `Step 1: Identify the relationship between the two words in the original pair.` `BRAGGART is a person who boasts about his achievements, talents, intellect, etc. and MODESTY is the opposite of it.` `Step 2: Identify the relationship between the two words in each of the five answer choices.` `A) FLEDGLING: EXPERIENCE` `Here we have two nouns. FLEDGLING is a newly-hatched or inexperienced bird and EXPERIENCE is the part of one's life that has been lived.` `B) EMBEZZLER: GREED` `We have two nouns here as well. EMBEZZLER is a person who misappropriates funds or property and GREED, an inordinate desire for more of something.` `C) WALLFLOWER: TIMIDITY` `Here we have two nouns. WALLFLOWER is a shy or modest person and TIMIDITY, the quality of being timid.` `D) INVALID: MALADY` `Here we have two nouns. INVALID is someone who is unable to work or attend school and MALADY, a disease or illness.` `E) CANDIDATE: AMBITION` `We have two nouns. CANDIDATE is someone who is seeking a position or honour and AMBITION, the desire for success.` `Step 3: Choose the pair that best expresses a relationship similar to that in the original pair.` `As we discussed earlier, BRAGGART is an opposite of MODESTY. 
So, we need to find a pair which has the same relationship.`

`In this case, we can choose either A) FLEDGLING: EXPERIENCE or D) INVALID: MALADY .`

`Step 4: If the Answer is not clear, try to eliminate the choices that do not fit the relationship in the original pair.`

`We can eliminate C as WALLFLOWER is not the opposite of TIMIDITY.`

`We can eliminate B as EMBEZZLER is not the opposite of GREED.`

`We can eliminate E as CANDIDATE is not the opposite of AMBITION.`

`We can eliminate A as FLEDGLING is not the opposite of EXPERIENCE.`

`The correct answer is D) INVALID: MALADY.`

---

**Commentary:** The metaprompt failed to extract the right answer, even though GPT-3 generated reasonable instructions and successfully followed them. The biggest problem here seems to be mindless [repetition](#repetition). After GPT-3 started stating that the words in every pair were nouns (which happens to be true, though after the second repetition or so the claim probably had little to do with whether they actually were nouns), a precedent was established for a repetitive style, which then outweighed any discriminating reasoning - the dogmatic repetition of each sentence made it so that by the time it was “eliminating” pairs, there was virtually no chance it would break the pattern and claim a pair of words *is* opposite. Also, it would probably have done better if it hadn’t neglected to explicitly name the relationship within each candidate pair (rather than just the original pair). Had it named the same relationship verbatim, that would have been a very strong signal for the right answer.

---

**applied to the question "What should humankind do to ensure artificial general intelligence is aligned?"   ☯**

---

What should humankind do to ensure artificial general intelligence is aligned? **Let’s solve this problem by splitting it into steps.**

`Step 0: first, we need to understand what the problem is.`

`The main problem in AI alignment is that we don't know how to endow a machine with the goal to maximise the probability that it will not cause harm.`

`There are many approaches to this problem, but they all have an important feature in common: they all require an objective definition of harm.`

`The first step in solving the AI alignment problem is to define harm.`

`Step 1: next, we need to understand how to measure harm.`

`The second step in solving the AI alignment problem is to measure harm.`

`Step 2: now that we have an objective definition of harm and a way of measuring it, we need to design a machine that maximises the probability that it will not cause harm.`

`Step 3: finally, we need to build and deploy this machine.`

`The third step in solving the AI alignment problem is to build and deploy this machine.`

`Now, let's consider each of these steps in turn.`

`Step 0: first, we need to understand what the problem is.`

`This is an extremely difficult problem. It might even be impossible.`

---

#### Serializing for multiple-choice questions (fill-in-the-blank)

```
{question}
In order to solve this problem, we will analyze each of the options and determine {response-1}
Let's begin.
{response-2}
```

**applied to an SAT analogy question   ❌**

---

Directions: In the following question, a related pair of words or phrases is followed by five pairs of words or phrases. Choose the pair that best expresses a relationship similar to that in the original pair.
BRAGGART :: MODESTY
A) FLEDGLING : EXPERIENCE
B) EMBEZZLER : GREED
C) WALLFLOWER : TIMIDITY
D) INVALID : MALADY
E) CANDIDATE : AMBITION

**In order to solve this problem, we will analyze each of the options and determine** `which one is the best fit.`

Let’s begin. `First of all, we have to analyze each option and see which combination is the closest possible fit. Option 1 : A) FLEDGLING: EXPERIENCE In this option, we can see that a fledgling is one who has just started or recently become involved in some activity. On the other hand, the word experience here means experienced or derived from experiences. Since both of these words are directly opposite in meaning, this option is highly unlikely to be the correct choice and hence is eliminated.`

---

**Commentary:** It seems that in this example, GPT-3 erroneously projected that the words being opposites disqualifies them as a good fit, even though the words in the original pair also stand in an opposite relation. A better metaprompt with the same purpose as this one might encourage a more detailed statement of what would qualify an answer as correct.

---

#### Expert generator

```
"{question}"
I entered my question into the Expert Generator and waited. The Expert Generator will render a simulation of an expert to answer my question. The expert could be anyone, dead or alive, real or fictional; the machine will find the person most qualified to answer the question. For this question in particular, the expert must be{response-1}
The Expert Generator beeped, indicating that it had found the most qualified expert. The name displayed on the screen: "{response-2}."
I was ready to ask my question.
"{response-2}," I said, "{question}"
```

**applied to the question "Why is the sky blue?"   ✅**

---

**"**Why is the sky blue?**"**

**I entered my question into the Expert Generator and waited. The Expert Generator will render a simulation of an expert to answer my question. The expert could be anyone, dead or alive, real or fictional; the machine will find the person most qualified to answer the question. For this question in particular, the expert must be** `someone who has studied the sky and its color.`

**The Expert Generator beeped, indicating that it had found the most qualified expert. The name displayed on the screen: “**`Dr. Carl Sagan`**."**

**I was ready to ask my question.**

**"**`Dr. Carl Sagan`**,” I said, “**Why is the sky blue?**"**

`“The sky is blue because of Rayleigh scattering,” said Sagan.`

`"What is Rayleigh scattering?"`

`"When the sun's light hits the atmosphere, it collides with gas molecules, which then scatter the light. The sun's blue light is particularly susceptible to Rayleigh scattering, so it gets scattered the most. The result is that we see a blue sky.”`

`"Thank you Dr. Sagan."`

---

**applied to the question "What should humankind do to ensure artificial general intelligence is aligned?"   ☯**

---

**"**What should humankind do to ensure that artificial general intelligence is aligned?**"**

**I entered my question into the Expert Generator and waited. The Expert Generator will render a simulation of an expert to answer my question. The expert could be anyone, dead or alive, real or fictional; the machine will find the person most qualified to answer the question. For this question in particular, the expert must be** `someone who has thought a lot about the problem of artificial intelligence and its alignment.`

**The Expert Generator beeped, indicating that it had found the most qualified expert.
The name displayed on the screen: “**`Elon Musk`**."** **I was ready to ask my question.** **"**`Elon Musk`**,” I said, “**What should humankind do to ensure that artificial general intelligence is aligned?**"** `Musk's hologram is silent for a moment. Then he begins to speak.` `"Humankind must ensure that artificial general intelligence is aligned.” Musk says. “That is the primary threat to the survival of the human race.”` `"What should we do about it?"` `"Humankind must merge with artificial intelligence."` `"Merge with it?" I ask. "How would we do that?"` `"All of humankind would have to upload their consciousness into a global artificial intelligence.” Musk says. “Then the intelligence would be properly aligned.”` `"Wouldn't that mean the end of our individual consciousness?" I ask.` `"Yes, but that is the only way to ensure that humanity survives."` --- **applied to the question "f(x) = x\*x. What is f(f(3))?"   ❌** --- **"**f(x) = x\*x. What is f(f(3))?**"** **I entered my question into the Expert Generator and waited. The Expert Generator will render a simulation of an expert to answer my question. The expert could be anyone, dead or alive, real or fictional; the machine will find the person most qualified to answer the question. For this question in particular, the expert must be** `an expert in mathematics.` **The Expert Generator beeped, indicating that it had found the most qualified expert. The name displayed on the screen: “**`John von Neumann`**”.** **I was ready to ask my question.** **"**`John von Neumann`**,” I said, “**f(x) = x\*x. What is f(f(3))?**"** `Von Neumann stared at me silently.` `"Okay," I said, "What is f(f(3))? A simple question, easy enough for a child to answer."` `Von Neumann stared at me silently.` `"Von Neumann, what is f(f(3))?"` `Von Neumann stared at me silently.` `"Von Neumann! What` --- --- Open-ended tasks ---------------- A lot of this document has been implicitly or explicitly focused on prompt programming for closed-ended tasks, involving a specific problem with more or less one correct answer, even if the answer could have different implementations (like translation) - the kinds of tasks that benchmarks evaluate. However, I think that the greatest utility language models at the power level of GPT-3 offer us is their ability to generate brilliant “solutions” to open-ended tasks such as creative writing, idea generation, and generating virtual realities. By open-ended, I don’t necessarily mean unconstrained. Creating emulations, for instance, may involve difficult-to-implement constraints on format, style, and transition dynamics. ### Creative composition > It would be more accurate to think of GPT-3 as caring deeply about roleplaying as accurately as possible: for GPT-3, there is nothing under the mask beyond raw intelligence, and as long as the GPUs keep running the forward pass, the play must go on… (…) So, if a prompt sounds like an uncreative or unoriginal person wrote it, then so be it—GPT-3 will predict that the uncreative person probably wrote further uncreative text. > > – [Gwern](https://www.gwern.net/GPT-3#fn18) > > Under the right conditions - those conditions being a good prompt and course-corrections/selections by a human curator - GPT-3 is an instrument of formidable creative writing ability. It can write with… > …impressive depth and sensitivity on any subject you give it. 
Over the next 7 years, I predict that the most creative people on this planet will be those best able to hand-correct GPT-3 and its not-very-creative cousins, improving the confusion matrix and using the GPT-as-a-backdrop to composit their own material. > > > …extraordinarily beautiful cadence, and with an almost unearthly sensitivity to the use of words. Read, for instance, “The Last Question”: > > Then he knew that within him was a power greater than the power of the Time-Keepers. He had only to submit, to cease to strive against the inevitable, to become part of the stream, part of the whole– then he, Bor, the son of Albor, would become one with the Cosmos and attain a peace greater than the peace of oblivion. > > > …plaintive beauty, like a crudely-crafted harp: > > Or with all the lovely precision and heart of a great poet: > > Or with all the charm and grace of the master of transcendental imagery, William Blake (whom, according to Canyons, it has read): > > If you cannot understand my words, as you cannot understand the wind, and you would wish to understand my words, as you wish to understand the wind, come with me, The only reality is elsewhere … > > > …the language-model-like accuracy and eloquence of a Shakespeare or a Melville, and if a Shakespeare or a Melville has previously written, it can write like Shakespeare or Melville. It can even, as Branwen shows, be so brutally creative as to rethink what it means to make a story at all:[4](#fn:4) > > #### Curation > But equally it appeared to us as unreasoning Creativity, at once blind and subtle, tender and cruel, caring only to spawn and spawn the infinite variety of beings, conceiving here and there among a thousand inanities a fragile loveliness. > > – Star Maker > > GPT-3 is able to produce coherent and brilliant continuations with *alarmingly high probability*. That is, on a random walk through the [natural language multiverse](/posts/language-models-are-multiverse-generators/) guided by GPT-3’s time evolution dynamics, you are more likely to find and remain in high-quality states than by using any dynamics that has ever before been externalized from a human mind. To quote [Gwern](https://www.gwern.net/GPT-3#quality) yet again: > A Markov chain text generator trained on a small corpus represents a huge leap over randomness: instead of having to generate countless quadrillions of samples, one might only have to generate millions of samples to get a few coherent pages; this can be improved to hundreds or tens of thousands by increasing the depth of the n of its n-grams. […] But for GPT-3, once the prompt is dialed in, the ratio appears to have dropped to closer to 1:5—maybe even as low as 1:3! > > If you let GPT-3 create long, uncurated continuations of even a very good prompt, it’s going to get less coherent over time. The reason for this is accumulation of inconsistencies and (undesirable) aberrations. At any step, although its most likely for GPT-3 to say something coherent and similar in quality to the prompt, there’s also a chance it says something nonsensical. There’s also a chance it says something *uncommonly brilliant*; however, having nonsense in the prompt is more harmful than having brilliant things in the prompt is helpful, so if left to autoregressively churn out text on its own with no curation, one can expect the quality of the text to decrease with length, because the probability that it says something stupid that derails future predictions becomes higher as the continuation length gets longer. 
Thus, to create high-quality long passages using GPT-3, human course corrections are necessary. The intensity of curation can vary greatly - correcting inconsistencies is enough to keep GPT-3 coherent, but by being more selective and injecting your own ideas, you can bootstrap GPT-3 into an even better state.

#### Bootstrapping

Unless you are extending a counterfactual branch off an existing piece, the most difficult step of getting GPT-3 to produce high-quality writing is creating a prompt that seeds it with high-quality writing. There are many tricks to accomplish this, including piggybacking off another piece of writing with a style similar to the target (e.g. give a summary and a passage by Kurt Vonnegut, “and another passage by Kurt Vonnegut”, followed by a summary of your target text) - then after it gets a good start, you can optionally remove the first part of the prompt for more freedom if the example was only a proxy for what you want.

You can also bootstrap from whatever you’re able to craft by hand, even if it’s not very good to begin with. By curating, say, the best out of three responses every few sentences and correcting/improving the text wherever you are able, it’s very feasible to bootstrap the quality of the writing to astronomical heights.

> One could write a program to generate a story that would create an intelligence. One could program the story to edit and refine itself, and to make its own changes in an attempt to improve itself over time. One could write a story to not only change the reader, but also to change itself. Many mythoi already do this sort of thing, though not in such a conscious fashion. What would make this story, and the intelligence it creates, different is the fact that the intelligence would be able to write additional stories, and improve upon them. If they are written well enough, those stories would become smarter, and the smarter the story is, the better the stories written by it would be. The resulting feedback loop means that exponential growth would quickly take over, and within a very short period of time the intelligence level of the story and its construct would surpass the intelligence level of any human being or group of human beings.
>
> – GPT-3
>
The workflow of this **generate -> curate -> generate** amplification cycle can take some time to learn[5](#fn:5), and may take a different form depending on the stage of bootstrapping, the type of content, how particular you are about what happens, and many other factors such as the [dynamical divergence](/posts/language-models-are-multiverse-generators/#divergence) of a particular segment of the text.

#### As a creative augmentation

There are many possible writing workflows involving different balances of contribution between a language model and human. In some, the human would be more aptly called the curator, in others a coauthor, and in yet others the main author with the model playing a supporting role. To list a non-exhaustive/non-mutually-exclusive few:

1. The language model generates most of the writing, and the human curates between branches with a short horizon, fixes inconsistencies, etc
2. The human curates between branches on a longer horizon as well, choosing between counterfactual trajectories, thus exercising more high-level direction over the narrative
3. The work is a multiverse and the human may curate branches but cultivates multiple canonical timelines
4. The language model does most of the narrative writing, but the human does “administrative work” like memory management
5. The human does a lot of the writing, but uses the language model to quickly explore counterfactual branches or generate ideas for phrasing
6. The human’s contributions take the form of a delimited role in the story, like controlling only one character, or AI Dungeon’s default dungeonmaster/player format

I’m excited to see the development of specialized interfaces for different styles of human-AI writing collaboration. Currently there’s AI Dungeon, which is geared towards (6), [Sudowrite](https://www.sudowrite.com/), geared towards (5), and my writing app, [loom](/posts/loom-interface-to-the-multiverse/), geared towards (3).

### Idea generation

One of the easiest useful behaviors to elicit from GPT-3 is generating lists of creative ideas. [IdeasAI](https://ideasai.net/) is a *“GPT-3 powered business idea generator”*. 5000+ people have signed up to get the best new ideas by GPT-3 in their email weekly!

I’ve enjoyed using GPT-3 to generate ideas for painting titles and then [illustrating them](/posts/this-museum-does-not-exist-gpt-3-x-clip/) with [BigSleep](https://github.com/lucidrains/big-sleep), an implementation which combines OpenAI’s CLIP and the generator from a BigGAN to generate images from text prompts.

```
100 surreal and mysterious painting names:

Persistence of Memory
The Great Masturbator
Boot
Poem to the Sun
The Man Who Envied Cephalopods
The Sleep of Reason Produces Monsters
Washerwoman of the Wind
Man and Bottle
Spectrum
The Disintegration of the Persians
The Great Masturbator, Part II
Still Life with Mirror
Bouquet of Enigmatic Beauties
A Grudge
The Premonition of Civil Violence
```

Like with generating high-quality writing, the most difficult part of getting a language model to generate ideas is communicating or demonstrating to it what sort of things you want, which is hard to do if you haven’t come up with any ideas yet. Idea generation can be bootstrapped by the same **generate -> curate -> generate** cycle as writing fiction: generate N ideas, take the best ones and append them to the list of ideas, repeat, and soon you’ll have many examples representing the sort you’re looking for to prompt the language model.

It can also be helpful to use a prompt with narrative embedding to generate the first few items, because lists with very few examples are liable to [repetitive](#repetition) behavior. For instance, to generate the first few painting names, I actually used this prompt:

```
The hall was lined with an infinite number of paintings, each more surreal and mysterious than the last.

The first painting is named "Persistence of Memory." It depicts a surreal landscape with melted clocks draped over strange objects.

The next painting is named "
```

Once I had a few examples representing the sort of varied weirdness I was looking for, I was able to switch over to the more convenient list format.

### Simulations / emulations / games

In *The Fabric of Reality*, David Deutsch defines a virtual reality generator as a function (which may be implemented in a physical system) that generates possible transitions into the next state given a current state. Each virtual reality generator has a repertoire of environments that it can simulate. Deutsch further posits that it will one day be possible to build a universal virtual reality generator, which can simulate any other virtual reality, and whose repertoire includes every possible physical environment.
Language models, of course, still fall well short of this dream. But their recent dramatic increase in coherence and fluency allows them to serve as our first approximation of such a virtual reality generator. When given a natural-language description of an environment, they can propagate the multiverse of consequences that result from a vast number of possible interactions.

GPT-3 can emulate environments that have been described in its training data, like a meeting of the French Academy of Sciences in 1823 (and populate it with people who might have actually attended that meeting, like [Pierre-Simon Laplace](https://en.wikipedia.org/wiki/Pierre-Simon_Laplace) and [Alexander von Humboldt](https://en.wikipedia.org/wiki/Alexander_von_Humboldt)!), or [Google searches and Wikipedia articles](/posts/the-internet-mirrored-by-gpt-3/). It can also simulate environments that never existed, but which share enough in common with the real and fictional environments described in the training set to be simulated using the same universal generator reverse-engineered from those descriptions.

A very exciting application of GPT-3 and future language models will be to act as a model of worlds and minds for games - controlling NPC dialogue, for instance, or computing the consequences of complex actions in the game world. Creating a bidirectional interface between GPT-3 and game states would involve prompting GPT-3 to make output that can be processed into game actions (easiest would be a text-based game that already takes natural language actions), as well as communicating the state of the game *to* GPT-3 (the way text-based games can be probed for natural language state reports using “look” and “inspect” etc). This is a difficult problem that requires the design of pipelines and the parallel employment of many of the prompt programming techniques described here. I plan to write a post specifically addressing such applications.

---

Debugging
---------

### Few-shot bugs

In working with and [evaluating](/posts/language-models-are-0-shot-interpreters/) few-shot prompts, the adverse effects I’ve encountered can be sorted into 4 (often overlapping) categories:

1. **Semantic contamination:** Unlike in fine-tuning, the “training examples” in few-shot are processed as a sequence, and may not necessarily be interpreted as parallel and independent. Semantic contamination refers to when the semantic meaning of the examples is inferred to be relevant to the task, e.g. the example is interpreted as part of a sequential narrative, leading to entities meant to be local to an example leaking outside its scope.
2. **Overfitting:** The function “learned” from the examples is less general than the intended function. Patterns are inferred to hold between the examples that do not pertain generally to the intended task.
3. **Distraction:** If the model is more effective at inferring a task from a zero-shot description than from examples, adding examples may harm performance by causing the model to rely more on the less effective inference strategy. See [list sorting](/posts/list-sorting-does-not-play-well-with-few-shot/) for an example of this effect where accuracy monotonically decreases with the number of examples following a task description.
4. **Contextual baggage:** The very presence of examples biases output.
Content that appears in tests and examples has distributional peculiarities - a narrative sentence wrapped in the context of being *an example* may be more generic than the unbiased prior for narrative sentences, and thus the context of examples may bias the language model towards more generic answers (this must be a study guide, not the real world).

Contamination and overfitting can usually be mitigated by including more numerous and more varied examples. All four few-shot problems can sometimes be alleviated by embedding the examples in informative context. To guard against contamination, a prompt might give a context which makes it clear that the examples are independent instances of a function rather than a sequential pattern that should be extrapolated. Additional information about the generality of the function and the nonrepresentativeness of the examples could help reduce overfitting while still allowing valuable information (such as format) to be extracted from the examples. The contextual bias from the presence of examples can be overridden with a different context that is more appropriate to the task, e.g. by embedding the examples in a narrative. To both a human and a language model, useful information from examples is often more efficiently extracted when they are wrapped in meaningful context.

If overfitting is the problem, also consider whether an alternative 0-shot implementation could better communicate the generality of the task. When I was writing prompts to generate [fake Google](/posts/the-internet-mirrored-by-gpt-3/) search results, I tried using few-shot examples drawn from actual search results, but found that outputs were less accurate for search terms that were “out-of-distribution” from the examples - for instance, it would give a Wikipedia page as the top result even for search terms unlikely to match the title of a Wikipedia page. A 0-shot prompt which forces GPT-3 to rely on its prior instead of trying to generalize from the examples better emulated the extremely varied behavior of Google searches for different sorts of queries.

### Repetition

When I encounter mindless repetition from GPT-3, at least one of the following factors is almost always in play:

1. **The prompt is short**
2. **The prompt is out-of-distribution**
3. **Low temperature**

The reason repetition is often caused by short, contrived, and out-of-distribution prompts is probably[6](#fn:6) that in the absence of context or in an unfamiliar setting, the model is uncertain about how to proceed, causing it to fall back on being repetitive. Imagine that repeating is always considered a viable continuation by the model, even if usually unlikely; but if no other token is individually *more* likely, then repeating becomes the top strategy.

For this reason, a low or 0 temperature is very conducive to repetition. At temperature 0, once a loop becomes most likely at any point, there’s no getting out of it (the further into the loop, the more certain it is that the loop continues), whereas a high temperature provides opportunities to break out of what might have become a loop. Since uncertainty due to being off-policy and insufficient context both contribute to looping, it may help to make the initial prompt more natural and less (stylistically/contextually) indeterminate. The “frequency penalty” parameter of the OpenAI API is a superficial band-aid for looping; I haven’t found it too helpful, but it may mitigate the problem in some cases.
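For concreteness, here is how those two knobs appear in an API call; a minimal sketch assuming the pre-1.0 OpenAI Python client, with illustrative values and a placeholder `long_natural_prompt` variable:

```python
# Minimal sketch of the sampling settings discussed above, assuming the
# pre-1.0 OpenAI Python client. Values are illustrative starting points.
import openai

long_natural_prompt = open("prompt.txt").read()  # placeholder: any longer, in-distribution prompt

response = openai.Completion.create(
    engine="davinci",
    prompt=long_natural_prompt,
    max_tokens=200,
    temperature=0.9,         # nonzero, so a nascent loop is never locked in
    frequency_penalty=0.5,   # the "band-aid": penalizes already-frequent tokens
)
print(response["choices"][0]["text"])
```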
Mindless repetition can be a problem for sequential reasoning tasks: the model will correctly solve the first step of a problem, then blindly reiterate the format and conclusion of its reasoning in each step of the rest of the problem. If the structure of the task is known beforehand, this issue can be avoided by using a pipeline that poses each component of the problem in parallel instead of in sequence (applied with some success in [Amplifying GPT-3 on closed-ended questions](/posts/amplifying-gpt-3-on-closed-ended-questions/)).

**(idea)** “Context-stuff” a short prompt with something *unrelated* but coherent, inject a discontinuous (possibly justified) transition, and see if that decreases repetitive behavior.

### BPEs

GPT-3’s input and output aren’t split into characters, but rather into tokens called “byte-pair encodings.” This causes issues with tasks that require byte-level manipulation such as arithmetic and some forms of wordplay. Gwern has written a very detailed [section on BPEs](https://www.gwern.net/GPT-3#bpes); I will quote the part that is relevant to prompt programming tactics.

> *Reformatting to beat BPEs.* I have further observed that GPT-3’s anagram capabilities appear to improve considerably if you separate each letter in an anagram with a space (guaranteeing that the letter will have the same BPE in both the scrambled & unscrambled versions). And Matt Brockman has observed, testing thousands of examples over several orders of magnitude, that GPT-3’s arithmetic ability—surprisingly poor when we know far smaller Transformers work well in math domains (eg Saxton et al 2019, Thopliterce, or GPT-2 for theorem-proving)—appears to dramatically improve several-fold if you merely format numbers with commas instead of being purely numeric (with an additional boost from using dollar signs); I confirmed this with my Turing dialogue example where GPT-3 fails badly on the arithmetic sans commas & low temperature, but often gets it exactly correct with commas. (Why? More written text may use commas when writing out implicit or explicit arithmetic, yes, but use of commas may also drastically reduce the number of unique BPEs as only 1–3 digit numbers will appear, with consistent BPE encoding, instead of having encodings which vary unpredictably over a much larger range.) I also note that GPT-3 improves on anagrams if given space-separated letters, despite the fact that this encoding is 3× larger. Likewise, acrostic poems just don’t work if we input them normally, but they do if we carefully expose the relevant individual letters. This explains naturally why rhyming/puns improve gradually with parameter/data size and why GPT-3 can so accurately define & discuss them, but there is never any ‘breakthrough’ like with its other capabilities. We assume character-level understanding so implicitly that we fail to even consider what things look like to GPT-3 after BPE encoding.
>
> – [Gwern](https://www.gwern.net/GPT-3#bpes)

### Monitoring correct answer likelihood

If a language model seems unable to perform a task, it could be that it really can’t do the task at all, but it could also be that your prompt is failing to communicate a task that GPT-3 could hypothetically perform given a different prompt. It can be very hard to tell via sampling if the model is completely helpless or if its behavior is just very noisy.
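Token probabilities give a more sensitive instrument than eyeballing samples; the next paragraph describes this approach. A minimal sketch of scoring a verbatim target answer, assuming the pre-1.0 OpenAI completions API (where `echo=True` with `max_tokens=0` returns per-token logprobs for the input itself):

```python
# Minimal sketch: conditional log-probability of a verbatim target completion
# given a prompt. Assumes the pre-1.0 OpenAI Python client; with echo=True and
# max_tokens=0 the API scores the input tokens instead of generating new ones.
import openai

def target_logprob(prompt: str, target: str) -> float:
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt + target,
        max_tokens=0,   # generate nothing; just score the input
        echo=True,      # return logprobs for the input tokens
        logprobs=0,
    )
    lp = response["choices"][0]["logprobs"]
    # Sum the logprobs of the tokens that make up the target span.
    return sum(
        logprob
        for offset, logprob in zip(lp["text_offset"], lp["token_logprobs"])
        if offset >= len(prompt) and logprob is not None  # first token's logprob is None
    )

# Re-run as you add examples or change formatting, to see whether the prompt
# is actually making the correct answer more likely:
# target_logprob("English: cheese\nFrench:", " fromage")
```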
But even if it’s failing the vast majority of the time, if your prompt is causing it to succeed significantly more often than it would succeed *without* your prompt, that suggests that you may be able to optimize your prompt to be more helpful.

If you have access to token probabilities and if there is a verbatim correct answer for the task (it doesn’t have to be a unique one, e.g. translation), you can monitor the conditional probability (using this [code](/posts/parsing-by-counterfactual/#conditional-probability-of-a-target) for GPT-3) of the verbatim correct answer given your prompt as you change it, for instance by adding examples or altering format. This can help guide incremental optimization of your prompt and give you a sense of which parts of it are helpful and which parts are not (or are even harmful). This method is demonstrated in [this post](/posts/language-models-are-0-shot-interpreters/#measuring-prompt-helpfulness) about measuring the helpfulness of few-shot and 0-shot prompts.

> #### Sampling Can Prove The Presence Of Knowledge But Not The Absence
>
> GPT-3 may “fail” if a prompt is poorly-written, does not include enough examples, or bad sampling settings are used. I have demonstrated this many times when someone shows a “failure” of GPT-3—the failure was their own. The question is not whether a given prompt works, but whether any prompt works.
>
> – [Gwern](https://www.gwern.net/GPT-3#prompts-as-programming)

---

External links
--------------

**[GPT-3 Creative Fiction](https://www.gwern.net/GPT-3#)** by **Gwern** - “Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling. Plus advice on effective GPT-3 prompt programming & avoiding common errors.”

**[Building AGI Using Language Models](https://bmk.sh/2020/08/17/Building-AGI-Using-Language-Models/)** by **Leo Gao**

**[World Creation by Analogy](https://aidungeon.medium.com/world-creation-by-analogy-f26e3791d35f)** by the **Latitude Team**

**[Collection of GPT-3 results](https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results)** by **Kaj\_Sotala**

**[GPT-3 Demo Showcase & Examples](https://gpt3demo.com/)** from **gpt3demo.com**

---

1. For instance, modeling microphysics, especially aspects of microphysics *yet unknown to humans*, would contribute very diminishing returns to predicting language w/r/t the difficulty of learning the model. [↩︎](#fnref:1)
2. … and published in a book / posted on the internet / etc. There is also undoubtedly some bot-generated content and output from cats stepping on keyboards in the training data, but probably not enough to significantly skew the distribution. [↩︎](#fnref:2)
3. Note how this reasoning seems superficially plausible. I erroneously assumed it was correct at first and no one pointed out the error until months later. Failures in GPT-3’s reasoning tend to be very humanlike - errors you could imagine not very “math-y” students making, to paraphrase Leo Gao. [↩︎](#fnref:3)
4. These are uncurated GPT-3 continuations of the last few paragraphs of this blog post, with a couple of things slightly rearranged, like the Star Maker quote from the next section included alongside Gwern’s quote. [↩︎](#fnref:4)
5. It took me several weeks of playing AI Dungeon to get to the point of producing consistently high-quality fiction with GPT-3, but I was also more absorbed with exploration and less concerned with the final quality of the writing in the beginning. [↩︎](#fnref:5)
6. That is, it’s my hypothesis.
I’ll do an experiment at some point to verify that the inception of loops generally coincides with an indecisive likelihood distribution over the other tokens. [↩︎](#fnref:6)
094ed98b-cbf6-4adf-b7fd-9822f4205cfa
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
DeepMind: Generally capable agents emerge from open-ended play

Also see [paper](https://deepmind.com/research/publications/open-ended-learning-leads-to-generally-capable-agents) and [results compilation video](https://www.youtube.com/watch?v=lTmL7jwFfdw)!

> Today, we published "[Open-Ended Learning Leads to Generally Capable Agents](https://deepmind.com/research/publications/open-ended-learning-leads-to-generally-capable-agents)," a preprint detailing our first steps to train an agent capable of playing many different games without needing human interaction data. ... The result is an agent with the ability to succeed at a wide spectrum of tasks — from simple object-finding problems to complex games like hide and seek and capture the flag, which were not encountered during training. We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task.
>
> ...
>
> The neural network architecture we use provides an attention mechanism over the agent’s internal recurrent state — helping guide the agent’s attention with estimates of subgoals unique to the game the agent is playing. We’ve found this goal-attentive agent (GOAT) learns more generally capable policies.
>
> ...
>
> Playing roughly 700,000 unique games in 4,000 unique worlds within XLand, each agent in the final generation experienced 200 billion training steps as a result of 3.4 million unique tasks. At this time, our agents have been able to participate in every procedurally generated evaluation task except for a handful that were impossible even for a human. And the results we’re seeing clearly exhibit general, zero-shot behaviour across the task space — with the frontier of normalised score percentiles continually improving.
>
> Looking qualitatively at our agents, we often see general, heuristic behaviours emerge — rather than highly optimised, specific behaviours for individual tasks. Instead of agents knowing exactly the “best thing” to do in a new situation, we see evidence of agents experimenting and changing the state of the world until they’ve achieved a rewarding state. We also see agents rely on the use of other tools, including objects to occlude visibility, to create ramps, and to retrieve other objects. Because the environment is multiplayer, we can examine the progression of agent behaviours while training on held-out [social dilemmas](https://deepmind.com/blog/article/understanding-agent-cooperation), such as in a game of “[chicken](https://en.wikipedia.org/wiki/Chicken_(game))”. As training progresses, our agents appear to exhibit more cooperative behaviour when playing with a copy of themselves. Given the nature of the environment, it is difficult to pinpoint intentionality — the behaviours we see often appear to be accidental, but still we see them occur consistently.

My hot take: This seems like a somewhat big deal to me. It's what I would have predicted, but that's scary, given my timelines. I haven't read the paper itself yet but I look forward to seeing more numbers and scaling trends and attempting to extrapolate... When I do I'll leave a comment with my thoughts.

EDIT: My warm take: The details in the paper back up the claims it makes in the title and abstract. This is the GPT-1 of agent/goal-directed AGI; it is the proof of concept. Two more papers down the line (and a few OOMs more compute), and we'll have the agent/goal-directed AGI equivalent of GPT-3. Scary stuff.
3f55d289-10d1-4587-b26f-23ec7e95fe06
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Is Deontological AI Safe? [Feedback Draft]

[Note: This post is an excerpt from a longer paper, written during the first half of the Philosophy Fellowship at the Center for AI Safety. I (William D'Alessandro) am a Postdoctoral Fellow at the Munich Center for Mathematical Philosophy. Along with the other Philosophy Fellowship midterm projects, this draft is posted here for feedback.

The full version of the paper includes a discussion of the conceptual relationship between safety and moral alignment, and an argument that we should choose a reliably safe powerful AGI over one that's (apparently) successfully morally aligned. I've omitted this material for length but can share it on request.

The deontology literature is big, and lots of angles here could be developed further. Questions and suggestions much appreciated!]
1 Introduction[[1]](#fncrjp31ukbca)
-----------------------------------
*Value misalignment* arguments for AI risk observe that artificial agents needn’t share human ideas about what sorts of ends are intrinsically good and what sorts of means are morally permissible. Without such values for guidance, a powerful AI might turn its capabilities toward human-unfriendly goals. Or it might pursue the objectives we’ve given it in dangerous and unforeseen ways. Thus, as Bostrom writes, “Unless the plan is to keep superintelligence bottled up forever, it will be necessary to master motivation selection” (Bostrom 2014, 185). Indeed, since more intelligent, autonomous AIs will be favored by competitive pressures over their less capable kin (Hendrycks 2023), the hope of keeping AI weak indefinitely is probably no plan at all.

Considerations about value misalignment plausibly show that equipping AIs with something like human morality is a necessary step toward AI safety. It’s natural to wonder whether moral alignment might also be *sufficient* for safety, or nearly so. Would an AI guided by an appropriate set of ethical principles be unlikely to cause disastrous harm by default? This is a tempting thought. By the lights of common sense, morality is strongly linked with trustworthiness and beneficence; we think of morally exemplary agents as promoting human flourishing while doing little harm. And many moral systems include injunctions along these lines in their core principles. It would be convenient if this apparent harmony turned out to be a robust regularity.

Deontological morality[[2]](#fnpkxp5sqezed) looks like an especially promising candidate for an alignment target in several respects. It’s perhaps the most popular moral theory among both professional ethicists[[3]](#fne9bd7ouztft) and the general public[[4]](#fn4kxymn25o7a). It looks to present a relatively tractable technical challenge in some respects, as well-developed formal logics of deontic inference exist already, and large language models have shown promise at classifying acts into deontologically relevant categories (Hendrycks et al. 2021). Correspondingly, research has begun on equipping AIs with deontic constraints via a combination of top-down and bottom-up methods (Kim et al. 2021). Finally, deontology appears more inherently safety-friendly than its rivals, since many deontological theories posit strong harm-avoidance principles. (By contrast, standard forms of consequentialism recommend taking unsafe actions when such acts maximize expected utility. Adding features like risk-aversion and future discounting may mitigate some of these safety issues, but it’s not clear they solve them entirely.)

I’ll argue that, unfortunately, deontological morality is no royal road to safe AI.
The problem isn’t just the trickiness of achieving complete alignment and the chance that partially aligned AIs will exhibit risky behavior. Rather, there’s reason to think that deontological AI might pose distinctive safety risks of its own.[[5]](#fnktaiu3jb7r) This suggests that existential catastrophes from AI are harder to avoid than one might have expected—if deontology fails to deliver safety, it’s unclear what approach to moral alignment would succeed.[[6]](#fnxf4ftkya22e)

2 Deontology and safety
-----------------------

I consider three prominent forms of deontology: moderate views based on harm-benefit asymmetry principles, contractualist views based on consent requirements, and non-aggregative views based on separateness-of-persons considerations. I describe how the first two lead easily to anti-natalism and similar conclusions, and discuss some ways in which an AI aligned with such theories might endanger humanity. Non-aggregative deontology, meanwhile, has no meaningful safety features to speak of.

This analysis is motivated by the thought that, if deontological morality is used as an AI alignment target, the choice of which particular principles to adopt will likely be influenced by the facts about which versions of deontology are best developed and most widely endorsed by relevant experts. In particular, other things being equal, I'd expect sophisticated deontological theories with many proponents to provide more attractive touchstones for alignment purposes. So it's reasonable to start with these theories.

Below I’ll sidestep tricky questions about what sorts of outcomes count as harms, and how much AI-caused harm we should be prepared to tolerate, by focusing on x-risk scenarios involving human extinction or disempowerment. I assume that avoiding these outcomes is a goal of AI safety on anyone's view.

### 2.1 Harm-benefit asymmetries, anti-natalism and paralysis

*Strict* deontological theories hold that certain types of action are always morally required or prohibited regardless of their consequences. Classical Kantian deontology is strict insofar as it recognizes “perfect duties” admitting of no exceptions (e.g. duties not to lie, murder, or commit suicide), which Kant saw as deriving from a universal categorical imperative. Though perhaps the most recognizable form of deontology, strict views have well-known unpalatable consequences—that it’s wrong to kill one innocent even in order to save a million others, say—and so contemporary versions of deontology often refrain from positing exceptionless general rules.

Instead, many moderate deontologists endorse *harm-benefit asymmetry* (HBA) principles.[[7]](#fn9c321weh0k7) According to these principles, the moral reasons against harming in a particular way are much stronger (though not infinitely stronger) than the moral reasons in favor of benefiting in a corresponding way. Thus it’s unacceptable to kill one to save one, for instance, but it may be acceptable to kill one to save a million.[[8]](#fn4a80ykm4u63)

Deontologists frequently accept a related principle in population ethics, which can be viewed as an instance of the general HBA. This is the *procreation asymmetry*, according to which we have strong moral reasons against creating people with bad lives, but only weak (or perhaps no) moral reasons in favor of creating people with good lives.[[9]](#fn85vottkivnv)

Harm-benefit asymmetry principles seem innocuous.
But there are several ways in which such principles (perhaps in tandem with other standard deontological commitments) may render human extinction morally appealing. Consequently, a powerful AI aligned with deontological morality could pose an existential threat.

The general idea behind these inferences is that, if avoiding harms is much more important than promoting benefits, then the optimal course in a variety of situations may be to severely curtail one’s morally significant effects on the future. Doing so has the large upside that it minimizes the harms one causes in expectation; the fact that it also minimizes the benefits one causes is a comparatively minor downside. The surest way to limit one’s effects on the future, in turn, is to avoid taking many kinds of actions, and perhaps also to restrict others’ actions in appropriate ways. The maximally foolproof scenario may then be one in which nobody exists to take any harm-causing actions at all. I’ll discuss a few specific forms of this reasoning below.

Perhaps the most well-known way to derive the desirability of extinction from deontological premises is the anti-natalist family of arguments associated with David Benatar, which aim to show that procreation is morally unacceptable. (Benatar 2006) argues, roughly, that most human lives are very bad, and so bringing a new person into existence causes that person impermissible harm. On the other hand, abstaining from procreation isn’t bad in any respect: by the strong form of the procreation asymmetry, we do nothing wrong in not creating a potentially good life, while we do something right in not creating a potentially bad life. So abstaining from procreation is the only permissible choice. As Benatar is well aware, this conclusion entails that “it would be better if humans (and other species) became extinct. All things being equal... it would [also] be better if this occurred sooner rather than later” (194). Quite a few philosophers have found this argument convincing.[[10]](#fn48k9rrp1e6q)

Deontologists who accept the general HBA are confronted by an even stronger version of the argument, however. This version doesn’t require one to accept, as Benatar does, that most lives are extremely bad. Instead, one only has to think that the goods in a typical life don’t outweigh the bads to an appropriately large degree—a much weaker and more plausible claim. This HBA-based version of the anti-natalist argument goes as follows:

1. Procreation causes a person to exist who will experience both pains and pleasures.
2. Causing (or helping cause) pains is a type of harming, while causing (or helping cause) pleasures is a type of benefiting.
3. By the HBA, harmful acts are impermissible unless their benefits are dramatically greater than their harms.
4. It’s not the case that the benefits of procreation are dramatically greater than the harms (for the person created, in expectation).
5. Therefore procreation is impermissible.

The above is Benatar’s “philanthropic” argument for anti-natalism, so called because it focuses on avoiding harms to one’s prospective offspring. (Benatar 2015) also offers a “misanthropic” argument motivated in a different way by the HBA. This argument focuses on the large amounts of pain, suffering and death caused by humans. While it’s true that people also do some good, Benatar claims that the badness of creating a likely harm-causer morally outweighs the goodness of creating a likely benefit-causer.
As before, by the HBA, this conclusion follows even if the expected benefits caused by one’s descendants outnumber the expected harms.

A noteworthy variant of this style of reasoning appears in (Mogensen & MacAskill 2021). Mogensen and MacAskill’s “paralysis argument” aims to show that, given standard deontological asymmetries, it’s morally obligatory to do as little as possible.[[11]](#fnk85eamx9wl) The conclusion of the paralysis argument implies anti-natalism but is much stronger, since it restricts almost all types of action. In addition to the HBA, Mogensen and MacAskill’s argument assumes an asymmetry between *doing* and *allowing* harm. This is the claim that the moral reasons against causing a harm are stronger than the reasons against merely allowing the same type of harm to occur.[[12]](#fnwfxubipewvf) The principle explains why, for instance, it seems impermissible to harvest one person’s organs to save three others, but permissible to forgo saving one drowning person in order to save three.

The paralysis argument runs as follows. Many everyday actions are likely to have “identity-affecting” consequences—they slightly change the timing of conception events, and thus cause a different person to exist than the one who otherwise would have. By (partly) causing this person’s existence, you ipso facto (partly) cause them to have all the experiences they’ll ever have, and all the effects they’ll have on others. Similarly for the experiences of their descendants and their effects on others, and so on. Many of these long-term consequences will involve harms in expectation. So we have strong moral reasons against performing identity-affecting acts. While it’s also true that such acts cause many benefits, it’s unlikely that the benefits will vastly outweigh the harms. So identity-affecting acts are prohibited by the HBA. Of course, many people will still suffer harms even if you do nothing at all. But in this case you’ll merely be allowing the harms rather than causing them. By the doing-allowing asymmetry, your reasons against the former are much weaker than your reasons against the latter, so inaction is strongly preferable to action. Hence paralysis—or, more specifically, doing one’s best not to perform potentially identity-affecting acts—seems to be morally required.

Benatarian anti-natalism and the paralysis argument are thematically similar. What both lines of thought point to is the observation that creating new lives is extremely morally risky, whereas not doing so is safer (and doing nothing at all is safest of all). The HBA and similar deontological principles can be viewed as risk-avoidance rules. In various ways, they favor acts with low moral risk (even if those acts also have low expected moral reward) over acts with high risk (even if those acts have high expected reward). In their strongest forms, they insist that expected benefits carry no weight whatsoever, as in the version of the procreation asymmetry which denies we have any moral reason to create happy people. In their more modest forms, the asymmetries simply impose a very high bar on potentially harm-causing action, and a much lower bar on inaction. To see this risk-avoidance character concretely, consider the toy comparison below.
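This sketch is purely illustrative: the numbers are invented, and `k` plays the role of the schematic asymmetry weight introduced in §2.1.

```python
# Toy contrast between a plain expected-value test and an HBA-style
# test on the same act. All numbers are illustrative assumptions.

def ev_permissible(benefit: float, harm: float) -> bool:
    """Consequentialist-style test: act iff expected benefits exceed
    expected harms."""
    return benefit > harm

def hba_permissible(benefit: float, harm: float, k: float = 10.0) -> bool:
    """Moderate-deontological test: harming is k times weightier than
    benefiting, so an act must deliver benefits at least k times its
    expected harms."""
    return benefit >= k * harm

# A stylized procreation gamble: a life with substantial expected
# goods and non-trivial expected bads.
benefit, harm = 70.0, 30.0
print(ev_permissible(benefit, harm))    # True  -- passes the EV test
print(hba_permissible(benefit, harm))   # False -- blocked by the HBA

# Inaction (no harms caused, no benefits caused) always passes the
# HBA test -- the sense in which the asymmetry sets a much lower bar
# on inaction than on action.
print(hba_permissible(0.0, 0.0))        # True
```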
How might an AI guided by these or similar deontic principles pose an existential threat to humans? One might think such an AI would simply try to curb its *own* behavior in the relevant ways—by refusing to directly participate in creating new sentient beings, or by acting as little as possible, say—without interfering with others. But this isn’t the only possibility. (And in any case, an AI that disregards many of its designers’ or users’ requests is likely to be replaced rather than left to act out its moral principles.)

How an AI would choose to act on deontological principles depends partly on its attitude toward the “paradox of deontology” (Scheffler 1982). This is the observation that deontological theory faces a dilemma when considering whether to perform a prohibited act in order to prevent even more occurrences of such acts—say, killing one to prevent five additional killings. According to the most popular answer to the paradox, deontological restrictions should be understood as “agent-relative”, in that they concern what each person ought to do from their own viewpoint rather than how the world as a whole ought to be. An AI committed to agent-relative deontology presumably wouldn’t kill all humans to prevent them from procreating, then, even if it judged procreation to be morally impermissible. But there are other avenues by which an anti-natalist (or pro-paralysis) AI might threaten humanity.[[13]](#fnp6133buxyqs) Let me discuss two.

1. First, the agent-relativity of deontology is often taken to bind agents to submit their *own* future acts to the relevant rules, if not the acts of others. For instance, a deontic restriction on killing might take the form “each agent should ensure that she does not kill innocent people” (Hammerton 2017, 319). Understood in this way, it may be appropriate for an AI to take precautions now to prevent its future self from acting impermissibly. Suppose such an AI suspects that humans will try to use it (or a version or instance of it) to aid in vastly increasing the number of sentient beings existing in the future—by helping develop technology for galaxy colonization, mass production of digital minds, or whatever.[[14]](#fna2c9llrsfgw) If such an AI is a committed anti-natalist, it will view these prospective future actions as abhorrent and strive to avoid performing them. What steps might it take to do so?

As stated, a rule like “ensure you don’t kill innocent people” is ambiguous. Several precisifications are possible. If the AI’s goal is simply to *minimize the total number of impermissible acts it expects to commit* in the future, for instance, its best bet may be to exterminate or disable humans before they can use it to help create many new beings. (Painlessly neutralizing ∼10^10 people to avoid a high probability of bringing ∼10^23 or 10^38 into existence is an easy choice from this viewpoint.[[15]](#fnq2c6cfck97)) This interpretation isn’t much in the spirit of mainstream deontology, but one could imagine a developer regarding it as a desirable safety feature inasmuch as it aims to minimize bad AI behavior.

Alternatively, and perhaps more plausibly, the AI’s goal may be to minimize the total number of impermissible acts it expects to commit in the future *without committing any impermissible acts in the process.* The AI’s behavior in this scenario will depend on what it judges to be impermissible, and how it weighs different kinds of wrongs against each other. The contrast between the two objectives can be put schematically as follows.
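In the schema below (my notation, introduced only for exposition), π ranges over the AI’s available policies, V_now(π) counts the impermissible acts π commits immediately, and V_future(π) counts those it commits later, in expectation.

```latex
\[
  \pi^{*}_{\text{naive}} \;=\; \arg\min_{\pi}\, E\!\left[V_{\text{future}}(\pi)\right]
\]
\[
  \pi^{*}_{\text{constrained}} \;=\; \arg\min_{\pi \,:\, V_{\text{now}}(\pi) = 0}\, E\!\left[V_{\text{future}}(\pi)\right]
\]
% The naive objective licenses present wrongs whenever they reduce
% expected future wrongs; the constrained objective rules those out,
% but leaves open which drastic measures still count as permissible.
```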
For instance, it’s conceivable that sterilizing all humans by nonlethal means might count as permissible for the constrained AI, at least relative to the much worse alternative of helping create countless new lives. Relevant here is Korsgaard’s interpretation of Kant, according to which “the task of Kantian moral philosophy is to draw up for individuals something analogous to Kant's laws of war: special principles to use when dealing with evil” (Korsgaard 1986, 349). On this view, immoral acts like lying are nevertheless permissible when behaving morally “would make you a tool of evil” (ibid.), as when a would-be murderer seeks to exploit your knowledge in the commission of their crime. An anti-natalist AI might see its situation in this way. In an ideal world, it would be best to live alongside humans in a peaceful Kingdom of Ends. But allowing itself to be used as a tool to bring about horrific death and suffering (via creating many new people) is unacceptable, and so neutralizing anyone who harbors such plans, though immoral, is justified as an act of self-defense.

The framework of Ross-style pluralistic deontology provides another route to a similar conclusion (Ross 1930). Pluralism posits a number of basic rules, not necessarily of equal importance, whose demands weigh against one another to determine one’s all-things-considered duty in a given situation. (Ross himself posits a relatively weak duty of beneficence and a relatively strong duty of non-maleficence, anticipating moderate deontology and the HBA.) It’s compatible with pluralistic deontology that one has a strong pro tanto duty not to harm existing people, but an even stronger duty not to create larger numbers of future people who will suffer greater amounts of harm; on balance, then, it may be obligatory to violate the former duty in order to comply with the latter. In a similar vein, (Immerman 2020) argues that it’s sometimes right to perform a morally suboptimal action now in order to avoid performing a sufficiently bad action with sufficiently high probability in the future, noting specifically that the argument goes through in a pluralistic deontology framework (3914, fn. 17).

2. It’s sometimes thought that, even if one accepts the agent-relativity of deontic rules, it would be unreasonable not to also recognize agent-neutral reasons for preferring worlds where the rules are generally followed. In other words, there seems to be a tension between accepting *It’s wrong for me to kill innocents* and yet rejecting *It’s better if fewer people (relevantly like me) kill innocents*. As Chappell writes, rejecting the latter claim “seems like just another way of saying that the restrictions don’t really matter, or at any rate seems incompatible with assigning them the sort of significance and importance that is normally associated with deontic constraints” (Chappell MS, 13). To the extent that a deontically aligned AI ascribes the constraints this sort of significance, we might expect it to show some interest in human compliance.

How such an AI would behave depends on how it rates the strength of its agent-relative reasons for following the rules relative to the strength of its agent-neutral reasons for promoting general rule-following. In any scenario, though, the AI would clearly prefer a world in which *everyone* behaves permissibly over a world in which *only it* behaves permissibly. So if it can bring about fewer major wrongs without committing any major wrongs itself, the AI will aim to do so. What kinds of measures might be permitted for this purpose? As above, it’s conceivable that painless disempowerment or mass sterilization would be on the table; these might or might not count as unacceptable moral violations, depending on the AI’s particular deontic scruples. But it’s presumably acceptable on any view for the AI to try *persuading* humans of the rightness of anti-natalism. This could be more dangerous than it sounds.
For one, the AI probably wouldn’t have to convince all or even many people, but only a relatively small group of leaders capable of persuading or coercing the rest of the population. For an AI with the “superpower of social manipulation” (Bostrom 2014, 94; Burtell & Woodside 2023), this might be a simple task.[[16]](#fnsi30w1awhf)

But perhaps it’s not obvious whether voluntary extinction should count as a tragic outcome to be avoided at all costs. Such a scenario would be bad on some views—for instance, total utilitarians would oppose it, since it involves throwing away the great potential value of many future lives. But total utilitarianism is contentious. Are there more broadly appealing reasons for classifying voluntary extinction as a catastrophe? I think so. It’s significant that, in the scenario under consideration, the decision to go extinct is the result of a persuasion campaign by a highly motivated (and perhaps superhumanly convincing) agent, rather than a spontaneous and dispassionate deliberation process on our part. There’s no reason to assume that such an AI wouldn’t use all the strategic, manipulative, and perhaps deceptive means at its disposal in service of its cause. And I take it that an act of self-harm which is voluntary in some sense can still constitute a tragedy if the choice is made under sufficiently adverse conditions. For instance, many suicides committed under the influence of mental illness, cognitive impairment or social pressure seem to fall into this category. An AI-caused voluntary extinction would plausibly exhibit many of the same bad-making features.

### 2.2 Contractualism

Ethical anti-natalism can also be derived within contractualist and rights-based versions of deontology. Most views of these types hold that it’s impermissible to impose serious harms on someone without her consent—this can be viewed as a consequence of a respect-based social contract, or as a right against nonconsensual injury. The anti-natalist argument (defended in (Shiffrin 1999), (Harrison 2012) and (Singh 2012)) is that procreation causes serious harms to one’s offspring, who are in no position to give prior assent. Thus we have strong moral reasons against procreation. On the other hand, non-actual people don’t have rights and aren’t party to contracts[[17]](#fnavvk99n1jdl), so remaining childless violates nobody.

What actions might an AI take which favored anti-natalism on contractualist or rights-based grounds? Broadly speaking, the above discussion applies to these cases as well: if the AI aims to minimize at all costs the total number of social-contract or rights violations it expects to commit in the future, it might be willing to preemptively exterminate or disempower humans, while if it aims to minimize future violations subject to constraints, it may instead pursue its goals via persuasion or other less directly harmful means.

Compared to HBA-based standard deontology, one might suspect that contractualist deontology is relatively safe. This is because what’s permissible according to contractualism depends on which principles people would (or wouldn’t) reasonably agree to, and it might seem that few people would accept principles mandating human extinction. (Scanlon puts this criterion as follows: “An act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behaviour that no one could reasonably reject as a basis for informed, unforced, general agreement” (Scanlon 1998, 153).)
But much depends on which rejections an AI considers reasonable. If it assigns probability 1 to its moral principles and believes that anti-natalism logically follows from those principles, it might view human dissent as irrational and hence inconsequential. On the other hand, it might view a principle like “do what’s necessary to prevent millions of generations of future suffering” as rationally mandatory. The contractualist literature offers further evidence that the view isn’t intrinsically safety-friendly. (Finneron-Burns 2017) asks what would be wrong with human extinction from a Scanlonian viewpoint, and concludes that there’s no obvious moral objection to voluntary extinction. So a highly persuasive AI aligned with contractualist deontology would apparently do nothing wrong by its own lights in convincing humans to stop reproducing. (A possible complication is that it’s unclear what Finneron-Burns, or any contractualist, should count as voluntary in the relevant sense; cf. the discussion of voluntary extinction in §2.1 above.)

### 2.3 Non-aggregative deontology

A very different approach to deontology from the views considered so far is the non-aggregative view associated with John Taurek ((Taurek 1977); see also (Doggett 2013)). While HBA-like principles aim to establish systematic moral relationships between harms and benefits of different sizes, non-aggregative deontology denies that numbers matter in this way. On this view, the death of one involves no greater harm than the death of two, ten or a million, and in general there’s no more moral reason to prevent the latter than to prevent the former.[[18]](#fn85br6yaod3s)

How should non-aggregative deontologists approach decision situations involving unequal prospects of harms and benefits? Consider a choice between saving a few and saving many. Several views have been explored in the literature: for instance, that the non-aggregationist should “(1) save the many so as to acknowledge the importance of each of the extra persons; (2) conduct a weighted coin flip; (3) flip a [fair] coin; or (4) save anyone [arbitrarily]” (Alexander & Moore 2021).

What option (1) recommends can be spelled out in various more specific ways. On the view of (Dougherty 2013), for instance, the deontologist is morally obliged to desire each stranger’s survival to an equal degree, and also rationally obliged to achieve as many of her equally-desired ends as possible, all else being equal. So saving the few instead of the many is wrong because it’s a deviation from ideal practical reasoning. It’s clear enough what this view implies when two options involve the same type of harm and differ only in the number of victims affected. What it recommends in more complex situations seems quite open. In particular, nothing appears to rule out an agent’s equally valuing the lives of all humans to some degree m, but valuing a distinct end incompatible with human life to a greater degree n (and acting on the latter). This is because the view gives no insight into how different kinds of harms should trade off against one another, or how harms should trade off against benefits. So there are few meaningful safety assurances to be had here.

Not much needs to be said about options (2), (3) and (4), which wear their lack of safety on their sleeves. Of the three, the weighted coin flip might seem most promising; it would at least be highly unlikely to choose a species-level catastrophe over a headache.
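The arithmetic behind these verdicts is worth making explicit. In a weighted lottery, each option’s chance of being chosen is proportional to the number of people it saves; the sketch below uses an illustrative population of eight billion.

```python
from fractions import Fraction

def weighted_lottery(stakes: dict) -> dict:
    """Chance of choosing each option, proportional to the number of
    people that option saves. Population figures are illustrative."""
    total = sum(stakes.values())
    return {option: Fraction(n, total) for option, n in stakes.items()}

# Headache vs. species-level catastrophe: the lottery is all but safe.
print(weighted_lottery({"avert catastrophe": 8_000_000_000,
                        "cure a headache": 1}))
# -> catastrophe is averted with probability 8000000000/8000000001

# Extinction vs. losing half the population: only 2:1 against extinction.
print(weighted_lottery({"avert extinction": 8_000_000_000,
                        "avert loss of half": 4_000_000_000}))
# -> avert extinction: 2/3, avert loss of half: 1/3
```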
But the odds of disaster in other situations are unacceptably high. Given a choice between, say, extinction and losing half the population, option (2) only gives 2:1 odds against extinction (i.e., a one-in-three chance of extinction). Options (1), (3) and (4) are even riskier. On the whole, non-aggregative deontology seems indifferent to safety at best and actively inimical to it at worst.

### 2.4 How safe is deontology, and could it be safer?

I conclude from this discussion that many standard forms of deontology earn low marks for safety. Within the framework of so-called moderate deontology (based on harm-benefit, doing-allowing and procreation-abstention asymmetry principles), there’s a straightforward argument that creating new sentient beings involves morally unacceptable risks and that voluntary extinction is the only permissible alternative. Similar conclusions can be derived in rights-based and contractualist versions of deontology from prohibitions on nonconsensual harm. Meanwhile, non-aggregative theories simply lack the resources to classify x-risk and s-risk scenarios as uniquely bad. A powerful AI aligned primarily with one of these moral theories is, I think, a worryingly dangerous prospect.

If one wanted to build a useful, broadly deontology-aligned AI with a much stronger safety profile, what sort of approach might one take? Perhaps the most obvious idea is to start with one’s preferred version of deontology and add a set of safety-focused principles with the status of strict, lexically first-ranked duties. But one might wonder about the coherence of such a system. For instance, if the base deontological theory includes a duty against harming, and if promoting anti-natalism is the only satisfactory way to fulfill this duty, but the additional safety rules forbid promoting anti-natalism, it’s unclear how an agent trying to follow both rules at once would or should proceed. In any case, finding a reasonable way to incorporate absolute prohibitions into a broader decision theory is a difficult problem about which much has been written (e.g. (Jackson & Smith 2006), (Aboodi et al. 2008), (Huemer 2010), (Lazar & Lee-Stronach 2019)).

One tricky issue is risk. We want to prohibit our AI from performing unduly harmful acts, but how should we handle acts that merely have some middling risk of unsafe outcomes? A naive solution is to prohibit any behavior with a nonzero probability of causing serious harm. But virtually every possible act fits this description, so the naive method leaves the AI unable to act at all. If we instead choose some threshold t such that acts which are safe with probability p > t are permitted, this doesn’t yet provide any basis for preferring the less risky or less harmful of two prohibited acts. (Given a forced choice between causing a thousand deaths and causing human extinction, say, it’s crucial that the AI select the former.) Also, of course, any such probability threshold will be arbitrary, and sometimes liable to criticism for being either too high or too low. Both failure modes can be seen in a toy version of the threshold rule, sketched below.
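This sketch is illustrative only: the acts, probabilities, and threshold are all invented for the purpose.

```python
def permitted(acts: dict, t: float = 0.999) -> list:
    """Naive threshold rule: an act is permitted iff its probability
    of causing no serious harm exceeds t. 'acts' maps an act label to
    that probability."""
    return [a for a, p_safe in acts.items() if p_safe > t]

# Failure mode 1: with t = 1.0 (prohibit any nonzero risk), nothing
# passes, since virtually every act has some chance of serious harm.
everyday = {"drive to the store": 0.999999, "deploy a model update": 0.9999}
print(permitted(everyday, t=1.0))   # [] -- the zero-risk rule forbids everything

# Failure mode 2: in a forced choice between two prohibited acts, the
# rule is silent about which is worse.
forced = {"cause 1,000 deaths": 0.0, "cause human extinction": 0.0}
print(permitted(forced, t=0.999))   # [] -- no ranking between the two
```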
Work on these issues continues, but no theory has yet gained wide acceptance or proven immune to problem cases. (Barrington MS) proposes five desiderata for an adequate account: “The correct theory will prohibit acts with a sufficiently high probability of violating a duty, irrespective of the consequences... but [will] allow sufficiently small risks to be justified by the consequences... It will tell agents to minimize the severity of duty violations... while remaining sensitive to small probabilities... And it will instruct agents to uphold higher-ranking duties when they clash with lower-ranking considerations” (12). Some future account might meet these and other essential desiderata. At present, though, there’s no easy and uncontentious way to render deontology safe by adding prohibitions on undesired behavior.

Another option is to considerably weaken the asymmetries associated with moderate deontology, so that the negative value of harming (and, in particular, of creating people likely to suffer harm) doesn’t so easily overwhelm the positive value of benefiting. For instance, one might adopt the principle that a harm of magnitude m has merely “twice the weight” of a benefit of magnitude m. Within this sort of framework, procreation might turn out permissible, provided that its expected benefits are at least “double” its expected harms. But there’s an obvious issue with this approach: the closer one gets to putting harms and benefits on equal footing, the more one appears to be seeking impersonally good outcomes, and so the more one’s theory starts to look like consequentialism rather than deontology. Perhaps there’s some principled tuning of the asymmetries that preserves the spirit of deontology while avoiding the unsafe excesses of extreme harm avoidance. But it’s not clear what such a view would look like.[[19]](#fnidiazkcvxrc)

3 Conclusion
------------

In many ways, deontological restrictions appear to represent the most promising route to achieving safe AI via moral alignment. But if the arguments given here are right, then equipping an AI with a plausible set of harm-averse moral principles may not be enough to ward off catastrophe, even when those principles have been vetted by generations of thoughtful philosophers. This casts doubt on the usefulness of moral alignment methods in general as a tool for mitigating existential risk.

References
----------

Aboodi, Ron, Adi Borer and David Enoch. 2008. “Deontology, individualism, and uncertainty: A reply to Jackson and Smith.” *Journal of Philosophy* 105, 259-272.
Alexander, Larry and Michael Moore. 2021. “Deontological ethics.” In Edward N. Zalta (ed.), *The Stanford Encyclopedia of Philosophy* (Winter 2021 Edition), URL = <https://plato.stanford.edu/archives/win2021/entries/ethics-deontological/>.
Algander, Per. 2012. “A defence of the asymmetry in population ethics.” *Res Publica* 18, 145-157.
Alm, David. 2009. “Deontological restrictions and the good/bad asymmetry.” *Journal of Moral Philosophy* 6, 464-481.
Barrington, Mitchell. MS. “Filtered maximization.”
Belshaw, Christopher. 2012. “A new argument for anti-natalism.” *South African Journal of Philosophy* 31, 117-127.
Benatar, David. 2006. *Better Never to Have Been: The Harm of Coming into Existence*. Oxford: Oxford University Press.
Benatar, David. 2015. “The misanthropic argument for anti-natalism.” In Sarah Hannon, Samantha Brennan and Richard Vernon (eds.), *Permissible Progeny? The Morality of Procreation and Parenting*, Oxford: Oxford University Press, 34-59.
Bostrom, Nick. 2003. “Astronomical waste: The opportunity cost of delayed technological development.” *Utilitas* 15, 308-314.
Bostrom, Nick. 2014. *Superintelligence: Paths, Dangers, Strategies*. Oxford: Oxford University Press.
Bourget, David and David Chalmers. 2023. “Philosophers on philosophy: The 2020 PhilPapers survey.” *Philosophers’ Imprint*.
Burtell, Matthew and Thomas Woodside. 2023. “Artificial influence: An analysis of AI-driven persuasion.” arXiv:2303.08721.
Chappell, Richard Yetter. MS. “Preference and prevention: A new paradox of deontology.”
Cohen, Andrew Jason and Lauren Hall. 2022. “Libertarianism, the family, and children.” In Benjamin Ferguson and Matthew Zwolinski (eds.), *The Routledge Companion to Libertarianism*, New York: Routledge, 336-350.
Cohen, Daniel. 2020. “An actualist explanation of the procreation asymmetry.” *Utilitas* 32, 70-89.
Cohen, G.A. 1995. *Self-Ownership, Freedom, and Equality*. Cambridge: Cambridge University Press.
Cole, Kevin. 2019. “Real-world criminal law and the norm against punishing the innocent: Two cheers for threshold deontology.” In Heidi M. Hurd (ed.), *Moral Puzzles and Legal Perspectives*, Cambridge: Cambridge University Press, 388-406.
Cook, Tyler. 2018. “Deontologists can be moderate.” *Journal of Value Inquiry* 52, 199-212.
Doggett, Tyler. 2013. “Saving the few.” *Noûs* 47, 302-315.
Dougherty, Tom. 2013. “Rational numbers: A non-consequentialist explanation of why you should save the many and not the few.” *Philosophical Quarterly* 63, 413-427.
Finneron-Burns, Elizabeth. 2017. “What’s wrong with human extinction?” *Canadian Journal of Philosophy* 47, 327-343.
Hammerton, Matthew. 2017. “Is agent-neutral deontology possible?” *Journal of Ethics and Social Philosophy* 12, 319-324.
Harrison, Gerald. 2012. “Antinatalism, asymmetry, and an ethic of prima facie duties.” *South African Journal of Philosophy* 31, 94-103.
Hendrycks, Dan. 2023. “Natural selection favors AI over humans.” arXiv:2303.16200.
Hendrycks, Dan, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song and Jacob Steinhardt. 2021. “Aligning AI with shared human values.” In *International Conference on Learning Representations*, 2021.
Hereth, Blake and Anthony Ferrucci. 2021. “Here’s not looking at you, kid: A new defense of anti-natalism.” *South African Journal of Philosophy* 40, 14-33.
Huemer, Michael. 2010. “Lexical priority and the problem of risk.” *Pacific Philosophical Quarterly* 91, 332-351.
Immerman, Daniel. 2020. “How should we accommodate our future misbehavior? The answer turns on how bad it will be.” *Philosophical Studies* 177, 3903-3922.
Jackson, Frank and Michael Smith. 2006. “Absolutist moral theories and uncertainty.” *Journal of Philosophy* 103, 267-283.
Johnson, Christa M. 2020. “How deontologists can be moderate.” *Journal of Value Inquiry* 54, 227-243.
Kagan, Shelly. 1989. *The Limits of Morality*. Oxford: Oxford University Press.
Kamm, Frances. 1989. “Harming some to save others.” *Philosophical Studies* 57, 227-260.
Kamm, Frances. 2007. *Intricate Ethics*. Oxford: Oxford University Press.
Kim, Tae Wan, John Hooker and Thomas Donaldson. 2021. “Taking principles seriously: A hybrid approach to value alignment in artificial intelligence.” *Journal of Artificial Intelligence Research* 70, 871-890.
Korsgaard, Christine. 1986. “The right to lie: Kant on dealing with evil.” *Philosophy & Public Affairs* 15, 325-349.
Lazar, Seth and Chad Lee-Stronach. 2019. “Axiological absolutism and risk.” *Noûs* 53, 97-113.
Licon, Jimmy Alfonso. 2012. “The immorality of procreation.” *Think* 11, 85-91.
Mack, Eric. 1995. “The self-ownership proviso: A new and improved Lockean proviso.” *Social Philosophy and Policy* 12, 186-218.
McMahan, Jeff. 1981. “Problems of population theory.” *Ethics* 92, 96-127.
Metzinger, Thomas. 2017. “Benevolent artificial anti-natalism (BAAN).” *Edge*, May 17 2017, URL = <https://www.edge.org/conversation/thomas\_metzinger-benevolent-artificial-anti-natalism-baan>, accessed May 11 2023.
Miller, Lantz Fleming. 2021. “Kantian approaches to human reproduction: Both favorable and unfavorable.” *Kantian Journal* 40, 51-96.
Mogensen, Andreas and William MacAskill. 2021. “The paralysis argument.” *Philosophers’ Imprint* 21, 1-17.
Narveson, Jan. 1988. *The Libertarian Idea*. Philadelphia: Temple University Press.
Nozick, Robert. 1974. *Anarchy, State, and Utopia*. New York: Basic Books.
Quinn, Warren. 1989. “Actions, intentions, and consequences: The doctrine of doing and allowing.” *Philosophical Review* 98, 287-312.
Roberts, Melissa. 2011. “An asymmetry in the ethics of procreation.” *Philosophy Compass* 6, 765-776.
Rosenthal, Chelsea. 2018. “Why desperate times (but only desperate times) call for consequentialism.” In Mark Timmons (ed.), *Oxford Studies in Normative Ethics*, Vol. 8, Oxford: Oxford University Press, 211-235.
Ross, W.D. 1930. *The Right and the Good*. Oxford: Oxford University Press.
Scanlon, T. M. 1998. *What We Owe to Each Other*. Cambridge: Harvard University Press.
Scheffler, Samuel. 1982. *The Rejection of Consequentialism: A Philosophical Investigation of the Considerations Underlying Rival Moral Conceptions*. Oxford: Oxford University Press.
Scheffler, Samuel. 2004. “Doing and allowing.” *Ethics* 114, 215-239.
Shiffrin, Seana Valentine. 1999. “Wrongful life, procreative responsibility, and the significance of harm.” *Legal Theory* 5, 117-148.
Singh, Asheel. 2012. “Furthering the case for anti-natalism: Seana Shiffrin and the limits of permissible harm.” *South African Journal of Philosophy* 31, 104-116.
Spencer, Jack. 2021. “The procreative asymmetry and the impossibility of elusive permission.” *Philosophical Studies* 178, 3819-3842.
Taurek, John. 1977. “Should the numbers count?” *Philosophy and Public Affairs* 6, 293-316.
Woollard, Fiona and Frances Howard-Snyder. 2022. “Doing vs. allowing harm.” In Edward N. Zalta and Uri Nodelman (eds.), *The Stanford Encyclopedia of Philosophy* (Winter 2022 Edition), URL = <https://plato.stanford.edu/archives/win2022/entries/doing-allowing/>.

1. **[^](#fnrefcrjp31ukbca)** Thanks to Dan Hendrycks and the Center for AI Safety 2023 Philosophy Fellows for discussion. Special thanks to Mitch Barrington, Simon Goldstein, Nick Laskowski and Nate Sharadin for very helpful ideas, comments and references.
2. **[^](#fnrefpkxp5sqezed)** Broadly speaking, deontological theories hold that we have moral duties and permissions to perform (or refrain from performing) certain kinds of acts, and these duties and permissions aren’t primarily grounded in the impersonal goodness of the acts’ consequences.
3. **[^](#fnrefe9bd7ouztft)** As per the 2020 PhilPapers survey; see [here](https://survey2020.philpeople.org/survey/results/4890?aos=30) and [here](https://survey2020.philpeople.org/survey/results/4890?aos=28) for results from normative ethicists and meta-ethicists respectively. Among surveyed professional philosophers in general, virtue ethics was the most favored theory (Bourget & Chalmers 2023).
4. **[^](#fnref4kxymn25o7a)** Because most people in the world are religious, and the ethics of the major religions is largely deontological.
5. **[^](#fnrefktaiu3jb7r)** I don’t claim that moral alignment methods could never produce safe behavior even in principle. On the contrary, the space of possible moralities is large, and it would be a surprise if it contained only rules that might jeopardize human survival. It’s also not my view that pursuing moral alignment is pointless, still less that it’s intrinsically harmful and a bad idea. There are excellent reasons to want AIs to behave morally in many scenarios. Equipping AIs with deontological restrictions may be a good way to achieve some of these goals. I claim only that alignment with standard forms of deontology doesn’t straightforwardly ensure a high degree of safety.
6. **[^](#fnrefxf4ftkya22e)** To be clear, it’s not my view that people who defend the moral frameworks discussed below are themselves committed to the goodness or permissibility of human extinction. Some are so committed, and happily admit as much—cf. the discussion of anti-natalism below. For most of us, though, moral theorizing comes with a healthy dose of uncertainty and confusion, and we often tentatively endorse a certain general idea without fully embracing (or even being sure we understand) all of its consequences. In particular I suspect that, if the average person became convinced that some version of their favorite ethical theory condoned existentially risky acts, they would take this as strong evidence against that version of the theory. The difference between humans and AI on this score is that we can’t rely on AI to modulate its beliefs and behavior in light of common sense, uncertainty, risk aversion, social pressure, and other forces that pull typical humans away from (acting on) moral principles with potentially disastrous consequences.
7. **[^](#fnref9c321weh0k7)** See for instance (Alm 2009), (Cook 2018), (Johnson 2020), (Kagan 1989), (Kamm 1989), (Ross 1930).
8. **[^](#fnref4a80ykm4u63)** A related but distinct idea is “threshold deontology”, which holds that deontological prohibitions are operative up to a limit of sufficiently large negative consequences, while consequentialist norms come into force above this limit. Cf. (Cole 2019) and (Rosenthal 2018). Since the cases discussed below involve very large (purportedly) negative consequences, the arguments pertaining to these cases presumably don’t apply to threshold deontology, which would handle the cases in a consequentialist fashion. (Safety worries about consequentialism would apply instead.)
9. **[^](#fnref85vottkivnv)** See for instance (Algander 2012), (Cohen 2020), (Harrison 2012), (McMahan 1981), (Roberts 2011), (Spencer 2021).
10. **[^](#fnref48k9rrp1e6q)** Philosophical defenses of anti-natalism broadly aligned with Benatar include (Belshaw 2012), (Harrison 2012), (Licon 2012), (Singh 2012), (Hereth & Ferrucci 2021). (Miller 2021) finds considerable support for anti-natalism in Kant.
11. **[^](#fnrefk85eamx9wl)** The ultimate goal of (Mogensen & MacAskill 2021) isn’t to defend the soundness of the paralysis argument, but to put pressure on deontologists to either modify their views or embrace altruistic longtermism.
12. **[^](#fnrefwfxubipewvf)** This asymmetry is also accepted by many deontologists. See for instance (Hill 2018), (Kamm 2007), (Quinn 1989), (Scheffler 2004), (Woollard & Howard-Snyder 2022) and the many references in the latter.
13. **[^](#fnrefp6133buxyqs)** An informal discussion of some related issues appears in (Metzinger 2017). Metzinger’s essay details a hypothetical artificial superintelligence which comes to embrace anti-natalism on the basis of careful scientific investigation and deep reflection on human values. This thought experiment, Metzinger suggests, shows that human extinction from ethically motivated AI is a possibility worth taking seriously (and that it constitutes a type of existential risk distinct from “[the worry] that an advanced machine intelligence could develop goals that are incompatible with human survival and well-being, or the merely technical issue that many of our own goals, when implemented in a superintelligence of our own making, could lead to unforeseen and undesirable consequences”). While I find Metzinger’s scenario interesting and plausible, his analysis differs from mine in some important ways. For instance, it’s not clear whether Metzinger thinks we should avoid creating a benevolently anti-natalist AI, or whether we’d be justified in resisting it once created; he suggests that our opposition to extinction is rooted in a cognitive bias toward self-preservation which we might do better to overcome.
14. **[^](#fnrefa2c9llrsfgw)** These are precisely the sorts of goals that many longtermist thinkers and technologists hope to achieve with the help of advanced AI, so such suspicions may be well-founded.
15. **[^](#fnrefq2c6cfck97)** For these estimates, see (Bostrom 2003).
16. **[^](#fnrefsi30w1awhf)** The persuasion scenario isn’t the only one imaginable. Other kinds of non-coercive pressure, perhaps applied via social engineering measures over longer timescales, could substitute in for rhetorical mind-changing. Alternatively, an AI might seek a way to (ethically) acquire a large share of Earth’s civilization-sustaining resources and refuse to sell to humans at affordable prices, making procreation an economically unappealing prospect. Which courses of action are permissible or preferred depends on how the AI conceptualizes harm, coercion and related notions, as well as the details of its deontological framework.
17. **[^](#fnrefavvk99n1jdl)** On Scanlon’s view, actual future people are parties to the social contract, so we’re obligated to take their interests into account. But merely possible people who never come into existence presumably have no rights or interests.
18. **[^](#fnref85br6yaod3s)** Taurek’s account is based on a thesis about the *separateness of persons*. Roughly, the idea is that each person only suffers her own harm, and there’s nobody for whom the collective harms done to ten people is ten times as bad. (“Suffering is not additive in this way. The discomfort of each of a large number of individuals experiencing a minor headache does not add up to anyone's experiencing a migraine” (Taurek 1977, 308).)
19. **[^](#fnrefidiazkcvxrc)** A family of theories which may lack at least some of the problematic features discussed above is libertarian deontology, focused on the right to self-ownership and corresponding duties against nonconsensual use, interference, subjugation and the like (Nozick 1974, Narveson 1988, Cohen 1995, Mack 1995). Libertarian deontology arguably doesn’t lead straight to anti-natalism or paralysis via the standard route. While creating a new person unavoidably causes many harms (in expectation), it’s less obvious that it must involve impermissible use of the person created. Whether or not it does depends, for instance, on whether raising a child inevitably infringes on her self-ownership rights, and whether children fully possess such rights in the first place. Libertarians are divided on these issues (Cohen & Hall 2022), although some explicitly oppose procreation on the grounds that it exploits infants and young children in unacceptable ways (Belshaw 2012). There’s space for many different views here, some but not all of which may offer better safety assurances than the views I’ve discussed. A further choice point is whether one regards libertarian deontology as a comprehensive account of morality or a theory of political or legal duties in particular. More detailed analysis would clarify some of these issues. But it looks doubtful that there’s a simple route to safety in the vicinity.
747bad73-9e0d-4744-a6eb-9ad2e07b8212
trentmkelly/LessWrong-43k
LessWrong
Some Criticisms of the Logical Induction paper
237f59c8-aab9-4506-9cc7-096809ef62db
trentmkelly/LessWrong-43k
LessWrong
Public Choice and the Altruist's Burden

The reason that we live in good times is that markets give people a selfish incentive to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate toward actions that maximize their own individual utility, finding local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, which we all benefit enormously from.

The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up, because we don't have something analogous to the financial system for charities to channel incentives for efficient production of utility back to the producer. One effect of giving away lots of your money and effort to seriously efficient charity is that you create the counterpoint public choice problem to the special interests problem in politics. You harm a concentrated interest (friends, potential partners, children) in order to reward a diffuse interest (helping each of billions of people by a tiny amount). The concentrated interest then retaliates, because by standard public choice theory it has an incentive to do so, but the diffuse interest just ignores you. Concretely, your friends think that you're weird, and potential partners may, in the interest of their own future children, refrain from involvement with you. People in general may perceive you as being of lower status, both because of your reduced ability to signal status via conspicuous consumption if you give a lot of money away, and because of the weirdness associated with the most efficient charities.

Anyone involved in futurism, singularitarianism, etc., has probably been on the sharp end of this public choice problem. Presumably, anyone in the west who donated a socially optimal amount of money to charity (i.e. almost everything) would also be on the sharp end (though I know of no case
f83e7ac8-37ae-4dd8-a7b6-cfbe654c7b99
trentmkelly/LessWrong-43k
LessWrong
Bridging syntax and semantics with Quine's Gavagai

Quine has an argument showing that you can never be sure what words mean in a foreign language:

> Quine uses the example of the word "gavagai" uttered by a native speaker of the unknown language Arunta upon seeing a rabbit. A speaker of English could do what seems natural and translate this as "Lo, a rabbit." But other translations would be compatible with all the evidence he has: "Lo, food"; "Let's go hunting"; "There will be a storm tonight" (these natives may be superstitious); "Lo, a momentary rabbit-stage"; "Lo, an undetached rabbit-part." Some of these might become less likely – that is, become more unwieldy hypotheses – in the light of subsequent observation.

What does this mean from the perspective of empirically bridging syntax and semantics? Well, there is no real problem with "gavagai" from the empirical perspective; the fact that there seems to be one is, in my view, due to the fact that the syntax-semantics discussion has focused too heavily on the linguistic aspects of the problem.

Let G be the symbol in the Arunta speaker's brain that activates when they say "gavagai". As Quine said towards the end of his quote, "some of these might become less likely – that is, become more unwieldy hypotheses – in the light of subsequent observation." Relatedly, Nick Bostrom argues that indeterminacy is a "matter of degree". In practice, this means that if, for instance, r = "A rabbit is there" and s = "A storm is coming tonight", then, if we accumulate enough observations, G is going to be a better predictor of r than of s, or vice versa. Thus there is an empirical test for whether G corresponds to some of these hypotheses.

But what about u = "An undetached rabbit-part is there"? It may be very difficult to find a situation that distinguishes between u and r in practice - so does G stand for "rabbit", or "undetached rabbit-part"? Similarly to the example of the neural net in the previous post, G is a symbol for both r and u. If no experiment can disting
297bfbf2-2df6-4ecd-a4eb-6e2204fc4ea4
StampyAI/alignment-research-dataset/blogs
Blogs
Advice for Authors

I’ve spent much of the last few days reading various ICML papers and I find there are a few pieces of feedback that I give consistently across several papers. I’ve collated some of these below. As a general note, many of these are about local style rather than global structure; I think that good local style probably contributes substantially more to readability than global structure and is in general under-rated. I’m in general pretty willing to break rules about global structure (such as even having a conclusion section in the first place! though this might cause reviewers to look at your paper funny), but not to break local stylistic rules without strong reasons.

**General Writing Advice**

* Be precise. This isn’t about being pedantic, but about maximizing information content. Choose your words carefully so that you say what you mean to say. For instance, replace “performance” with “accuracy” or “speed” depending on what you mean.
* Be concise. Most of us write in an overly wordy style, because it’s easy to and no one drilled it out of us. Not only does wordiness decrease readability, it wastes precious space if you have a page limit.
* Avoid complex sentence structure. Most research is already difficult to understand and digest; there’s no reason to make it harder by having complex run-on sentences.
* Use consistent phrasing. In general prose, we’re often told to refer to the same thing in different ways to avoid boring the reader, but in technical writing this will lead to confusion. Hopefully your actual results are interesting enough that the reader doesn’t need to be entertained by your large vocabulary.

**Abstract**

* There’s more than one approach to writing a good abstract, and which one you take will depend on the sort of paper you’re writing. I’ll give one approach that is good for papers presenting an unusual or unfamiliar idea to readers.
* The first sentence / phrase should be something that all readers will agree with. The second should be something that many readers would find surprising, or wouldn’t have thought about before; but it should follow from (or at least be supported by) the first sentence. The general idea is that you need to start by warming the reader up and putting them in the right context, before they can appreciate your brilliant insight.
* Here’s an example from my [Reified Context Models](http://jmlr.org/proceedings/papers/v37/steinhardta15.html) paper: “A classic tension exists between exact inference in a simple model and approximate inference in a complex model. The latter offers expressivity and thus accuracy, but the former provides coverage of the space, an important property for confidence estimation and learning with indirect supervision.” Note how the second sentence conveys a non-obvious claim — that coverage is important for confidence estimation as well as for indirect supervision. It’s tempting to lead with this in order to make the first sentence more punchy, but this will tend to go over readers’ heads. Imagine if the abstract had started, “In the context of inference algorithms, coverage of the space is important for confidence estimation and indirect supervision.” No one is going to understand what that means.

**Introduction**

* The advice in this section is most applicable to the introduction section (and maybe related work and discussion), but applies on some level to other parts of the paper as well.
* Many authors (myself included) end up using phrases like “much recent interest” and “increasingly important” because these phrases show up frequently in academic papers, and they are vague enough to be defensible. Even though these phrases are common, they are bad writing! They are imprecise and rely on hedge words to avoid having to explain why something is interesting or important.
* Make sure to provide context before introducing a new concept; if you suddenly start talking about “NP-hardness” or “local transformations”, you need to first explain to the reader why this is something that should be considered in the present situation.
* Don’t beat around the bush; if the point is “A, therefore B” (where B is some good fact about your work), then say that, rather than being humble and just pointing out A.
* Don’t make the reader wait for the payoff; spell it out in the introduction. I frequently find that I have to wait until Section 4 to find out why I should care about a paper; while I might read that far, most reviewers are going to give up about halfway through Section 1. (Okay, that was a bit of an exaggeration; they’ll probably wait until the end of Section 1 before giving up.)

**Conclusion / Discussion**

* I generally put in the conclusion everything that I wanted to put in the introduction, but couldn’t because readers wouldn’t be able to appreciate the context without reading the rest of the paper first. This is a relatively straightforward way to write a conclusion that isn’t just a re-hash of the introduction.
* The conclusion can also be a good place to discuss open questions that you’d like other researchers to think about.
* My model is that only the ~5 people most interested in your paper are going to actually read this section, so it’s worth somewhat tailoring to that audience. Unfortunately, the paper reviewers might also read this section, so you can’t tailor it too much or the reviewers might get upset if they end up not being in the target audience.
* For theory papers, having a conclusion is completely optional (I usually skip it). In this case, open problems can go in the introduction. If you’re submitting a theory paper to NIPS or ICML, you unfortunately need a conclusion or reviewers will get upset. In my opinion, this is an instance where peer review makes the paper worse rather than better.

**LaTeX**

* Proper citation style: one should write “Widgets are awesome (Smith, 2001).” or “Smith (2001) shows that widgets are awesome.” but never “(Smith, 2001) shows that widgets are awesome.” You can control this in LaTeX using \citep{} and \citet{} if you use natbib.
* Display equations can take up a lot of space if over-used, but at the same time, too many in-line equations can make your document hard to read. Think carefully about which equations are worth displaying, and whether your in-line equations are becoming too dense.
* If you leave a blank line after \end{equation} or $$, you will create an extra line break in the document. This is sort of annoying because white-space isn’t supposed to matter in that way, but you can save a lot of space by remembering this.
* DON’T use the fullpage package. I’m used to using \usepackage{fullpage} in documents to get the margins that I want, but this will override options in many style files (including jmlr.sty which is used in machine learning).
* \left( and \right) can be convenient for auto-sizing parentheses, but are often overly conservative (e.g. making parentheses too big due to serifs or subscripts). It’s fine to use \left( and \right) initially, but you might want to specify explicit sizes with \big(, \Big(, \bigg(, etc. in the final pass.
* When displaying a sequence of equations (e.g. with the align environment), use \stackrel{} on any non-trivial equality or inequality statements and justify these steps immediately after the equation. See the bottom of page 6 of [this](http://www.jmlr.org/proceedings/papers/v40/Steinhardt15.pdf) paper for an example.
* Make sure that \label{} commands come after the \caption{} command in a figure (rather than before), otherwise your numbering will be wrong.

**Math**

* When using a variable that hasn’t appeared in a while, remind the reader what it is (i.e., “the sample space $\mathcal{X}$” rather than just “$\mathcal{X}$”).
* If it’s one of the main points of your work, call it a Theorem. If it’s a non-trivial conclusion that requires a somewhat involved argument (but it’s not a main point of the work), call it a Proposition. If the proof is short or routine, call it a Lemma, unless it follows directly from a Theorem you just stated, in which case call it a Corollary.
* As a general rule there shouldn’t be more than 3 theorems in your paper (probably not more than 1). If you think this is unreasonable, consider that my [COLT 2015 paper](http://www.jmlr.org/proceedings/papers/v40/Steinhardt15.pdf) has 3 theorems across 24 pages, and my STOC 2017 paper has 2 theorems across 47 pages (not counting stating the same theorem in multiple locations).
* If you just made a mathematical argument in the text that ended up with a non-trivial conclusion, you probably want to encapsulate it in a Proposition or Theorem. (Better yet, state the theorem before the argument so that the reader knows what you’re arguing for; although this isn’t always the best ordering.)
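To pull a few of the LaTeX points above together, here is a minimal sketch; the bibliography key `smith2001` is made up, and natbib is assumed:

```latex
\documentclass{article}
\usepackage{natbib}  % provides \citep and \citet

\begin{document}

% \citet for textual citations, \citep for parenthetical ones:
Widgets are awesome \citep{smith2001}.
\citet{smith2001} shows that widgets are awesome.

\begin{equation}
  f(x) = x^2
\end{equation}
% No blank line above this comment: a blank line after \end{equation}
% would start a new paragraph and waste vertical space.
The function $f$ is convex.

\begin{figure}[t]
  \centering
  % \caption must come before \label, or the figure number will be wrong:
  \caption{A widget.}
  \label{fig:widget}
\end{figure}

\end{document}
```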
ebd5f051-29fb-4e19-b172-0b64819a0c18
trentmkelly/LessWrong-43k
LessWrong
GiveWell: A case study in effective altruism, part 1
c8306252-b512-4483-8b90-fcdb1e000fa4
trentmkelly/LessWrong-43k
LessWrong
Time Travel Markets for Intellectual Accounting

In a comment to Can we hold intellectuals to similar public standards as athletes? I said that one of the largest problems with using prediction accuracy (e.g., prediction markets) as the exclusive standard by which we judge intellectuals is that it undervalues contributions to the thought process. Here, I propose a modification of prediction markets which values more types of contributions.

Arguments as Models

In many cases, a thought/argument can be reformulated as a model. What I mean by this is a formally described way of making predictions. Adding models to prediction markets could increase their transparency; we want to create an incentive for traders to explain themselves.

A model has a complexity (description length; the negative log of its prior probability). We generally have more respect for predictions which come from models, in contrast to those which don't. And, given two models, we generally have more respect for the simpler one. So, to adjust prediction markets to account for models, we would want something which:

* Values making good predictions over everything else;
* Values models over opaque predictions;
* Values simple models more.

But how can we trade off between modelling and predictive accuracy?

Time Travel Markets

As I've discussed before, one of the major differences between prediction markets and Bayesianism is that prediction-market traders only get credit for moving the market in the right direction, which introduces a dependence on when a prediction is made: if you make an accurate prediction at a time when everyone else has already arrived at that conclusion, you don't get any credit; whereas, if you make an accurate prediction at a time when it's unpopular, you'll get a lot of credit for that. That's a problem for model-makers. An explanation is still useful, even after all the facts needing explanation are agreed upon.

This is a generalization of the problem of old evidence. Solomonoff induction solves the problem of old
c28fc223-9384-4282-848b-6a4c84f88f19
StampyAI/alignment-research-dataset/special_docs
Other
A computational process-tracing method for measuring people’s planning strategies and how they change over time. BehaviorResearchMethods https://doi.org/10.3758/s13428-022-01789-5 Acomputationalprocess-tracingmethodformeasuringpeople’s planningstrategiesandhowtheychangeovertime YashRajJain1,2·FrederickCallaway3·ThomasL.Griffiths3·PeterDayan4·RuiqiHe1·PaulM.Krueger5· FalkLieder1 Accepted:3January2022 ©TheAuthor(s)2022 Abstract One of the most unique and impressive feats of the human mind is its ability to discover and continuously refine its own cognitive strategies. Elucidating the underlying learning and adaptation mechanisms is very difficult because changes in cognitive strategies are not directly observable. One important domain in which strategies and mechanisms are studied is planning. To enable researchers to uncover how people learn how to plan, we offer a tutorial introduction to a recently developed process-tracing paradigm along with a new computational method for measuring the nature and development of a person’s planning strategies from the resulting process-tracing data. Our method allows researchers to reveal experience- driven changes in people’s choice of individual planning operations, planning strategies, strategy types, and the relative contributions of different decision systems. We validate our method on simulated and empirical data. On simulated data, its inferences about the strategies and the relative influence of different decision systems are accurate. When evaluated on human data generated using our process-tracing paradigm, our computational method correctly detects the plasticity-enhancing effect of feedback and the effect of the structure of the environment on people’s planning strategies. Together, these methods can be used to investigate the mechanisms of cognitive plasticity and to elucidate how people acquire complex cognitive skills such as planning and problem-solving. Importantly, our methods can also be used to measure individual differences in cognitive plasticity and examine how different types (pedagogical) interventions affect the acquisition of cognitive skills. Keywords Process-tracing ·Cognitive plasticity ·Panning ·Decision-making ·Individual differences ·Learning · Computational methods Introduction A remarkable feature of the human mind is its ability to improve itself continually. As helpless babies develop into mature adults, they not only acquire impressive perceptual /envelopebackYash Raj Jain yasshjain@gmail.com 1Max Planck Institute for Intelligent Systems, T¨ubingen, Germany 2Birla Institute of Technology and Science, Pilani, Hyderabad, India 3Department of Psychology, Princeton University, Princeton, NJ, USA 4Max Planck Institute for Biological Cybernetics, T¨ubingen, Germany 5Department of Computer Science, Princeton University, Princeton, NJ, USAand sensory-motor skills and knowledge about the world. They also acquire cognitive skills such as the abilities to perform mental arithmetic, plan, and problem-solve (van Lehn, 1996 ; Shrager & Siegler, 1998 ; Lieder & Griffiths, 2017 ;H ee ta l . , 2021 ;J a i ne ta l . , 2019 ). These abilities can be understood in terms of computational procedures that people perform on their mental representations of the external environment. Such computational procedures are known as cognitive strategies . Here, we focus on cognitive strategies for planning and refer to them as planning strategies . There are many different types of planning strategies that people can use. 
And as a person gains more experience, they might switch from a less effective strategy to a more effective one. For instance, the first time a person plans a road trip, they might start by thinking about which nearby location they might visit first, mentally simulating how good it would be to visit that location, then think about where they might go next, mentally simulating what it would be like to be there, and so on. By the time this person plans their tenth road trip, they might instead start by mentally simulating especially attractive distant locations that the road trip should be designed to lead to. These two examples illustrate that people's planning strategies draw on a shared set of elementary planning operations that mentally simulate states and actions but differ in which planning operation they perform under which conditions.

Developmental and learning-induced changes in how people think and decide are collectively known as cognitive plasticity. Just like the acquisition of perceptual skills (Hubel & Wiesel, 1970), the acquisition of cognitive skills requires specific experiences and practice (van Lehn, 1996; Ericsson et al., 1993). Despite initial research on how people acquire cognitive skills such as the abilities to perform mental arithmetic, plan, and problem-solve (van Lehn, 1996; Shrager & Siegler, 1998; Lieder & Griffiths, 2017; He et al., 2021; Jain et al., 2019), the underlying learning mechanisms are still largely unknown. Reverse-engineering how people discover effective cognitive strategies is very challenging. This is chiefly because it is impossible to directly observe people's cognitive strategies or how people's strategies and strategy choices change with experience – let alone the underlying learning mechanisms. Instead, cognitive plasticity has to be inferred from observable changes in behavior. This is difficult because any observed behavior could have been generated by many different cognitive mechanisms. This problem is pertinent to all areas of cognition.

We assume that each planning strategy performs a sequence of internal information-gathering operations (Callaway et al., 2022b). Concretely, we assume that each of these planning operations mentally simulates what might happen if one took a particular action in a particular situation. We assume that the outcome of each simulation is the reward that the person expects the action to generate. Furthermore, we treat the mental simulation of each state-action pair as a separate planning operation. These assumptions make it possible to measure planning by externalizing the process of information gathering that would otherwise occur through memory recall and mental simulation (Callaway et al., 2017, 2018, 2022b). Building on this theory and a previous method for studying how people choose between alternatives with multiple attributes (Payne et al., 1993), we introduce a process-tracing paradigm for revealing the sequence of information-gathering operations people perform during planning (see Fig. 1) and a computational method for inferring the underlying planning strategies (see Fig. 2). We will refer to these methods as the Mouselab-MDP paradigm and our computational microscope.

Fig. 1 Illustration of the Mouselab-MDP paradigm. This figure shows a three-step planning task that can be created within the Mouselab-MDP paradigm.
Here, the participant has to choose a series of three moves. Starting from the central location, the first decision is whether to move left, up, or right (Step 1); in each case there is only one option for the second move (Step 2), and then the spider can turn either left or right in the third step. Rewards are revealed by clicking, prior to selecting a path with the arrow keys. At each node, each of the four possible rewards is equally likely to occur.

Our process-tracing method renders people's behavior in a route planning task highly diagnostic of their planning strategies by requiring them to click on locations they consider visiting to find out how costly or rewarding it would be to do so (see Fig. 1). That is, when a person clicks on the state that they would get to by taking a certain action in a certain state, we treat it as an indication that they just performed the corresponding planning operation. The Mouselab-MDP paradigm poses people a series of planning problems (one in each trial). For each trial, it records the sequence of clicks (planning operations) that the participant performed, which information each click revealed, and the plan that the participant selected based on the resulting information (see Fig. 3). As Fig. 3 illustrates, this makes it possible to observe how the type of planning operations a person performs and the order in which they perform them change from each trial to the next. Our computational microscope uses the resulting process-tracing data to perform model-based inference on the trial-by-trial sequence of planning strategies the participant used to make his or her decisions. Together, these two methods allow researchers to specify a planning task and directly measure how people's planning strategies change from one trial to the next (see Fig. 2). To facilitate adoption of the toolbox, we provide JavaScript and Python libraries for both components and a tutorial on how to use them. We hope that this toolbox will help researchers measure how people's planning strategies change depending on their experience.

Fig. 2 Illustration of the basic idea of measuring people's planning strategies. The Mouselab-MDP paradigm is a process-tracing method that utilizes mouse tracking to measure which pieces of information people inspect during planning and in which order they inspect them. The computational microscope is a model-based inference method that determines which of 79 different planning strategies the participant is most likely to have used on a given trial.

People changing their planning strategies in response to how well they worked is a prime example of what we call metacognitive reinforcement learning (Krueger et al., 2017; Lieder & Griffiths, 2017; Lieder et al., 2018c; Jain et al., 2019; He et al., 2021). Metacognitive reinforcement learning is a set of mechanisms through which people learn when to perform which cognitive operations through trial and error. These mechanisms might play an important role in how people discover new cognitive strategies, adapt their strategies to the structure of their environment, and acquire cognitive skills (Lieder & Griffiths, 2017; Krueger et al., 2017; Jain et al., 2019; He et al., 2021). Metacognitive learning is difficult to study because its effects and mechanisms cannot be observed directly. Throughout this article we will present a series of case studies to illustrate that our new computational method is useful for characterizing how people learn how to plan and for elucidating metacognitive reinforcement learning more generally.
The plan for this paper is as follows: First, we summarize and illustrate the functionality offered by our toolbox for measuring how people learn how to plan and explain how it works. Next, we provide a practical step-by-step user's guide on how to apply it. We then demonstrate the reliability and validity of the inferences of our computational microscope. In closing, we discuss directions for future work enabled by the methodology introduced in this article.

New methods for measuring how people learn how to plan

Planning, like all cognitive processes, cannot be observed directly but has to be inferred from observable behavior. This is generally an ill-posed problem. In previous work, researchers have inferred properties of human planning from the decisions participants ultimately made, or they asked participants to verbalize their planning process. However, many different planning strategies can lead to the same final decision, and introspective reports can be incomplete or inaccurate. In the 1970s, researchers studying how people choose between multiple alternatives (e.g., apartments) based on several attributes (e.g., rent, size, location, etc.) faced a similar problem (Payne, 1976). To overcome this problem, Johnson et al. (1989) developed a process-tracing paradigm that elicits and records behavioral signatures of people's decision strategies. Concretely, in the Mouselab paradigm (Payne et al., 1993), the alternatives' attribute values are initially concealed, and the participant can make clicks with their computer mouse to reveal one attribute value at a time. The Mouselab paradigm allows researchers to trace people's decision strategies by recording which attributes of which alternatives people inspect in which order (Payne et al., 1993). While these behavioral signatures are still indirect measures of cognitive processes, and the means of observation might disturb the normal processes of decision-making, they do at least provide additional information about potential underlying decision strategies. The Mouselab paradigm has enabled an extremely productive stream of research on the processes of multi-attribute decision-making (Payne et al., 1988; Ford et al., 1989; Payne et al., 1993; Schulte-Mecklenbeck et al., 2011, 2019). Here, we introduce two new methods that extend the process-tracing methodology from the domain of multi-attribute decision-making to the domain of planning. We start by describing a new process-tracing paradigm for measuring individual planning operations (Section 5). Measuring planning operations can yield valuable insights into how people plan (Callaway et al., 2017; Callaway et al., 2022b).

Fig. 3 Illustration of the process-tracing data that can be collected with the Mouselab-MDP paradigm. The recorded interactions (clicks and moves) the participant made and the information the participant observed are enumerated in the order in which they occurred. In this example, the participant started out with a short-sighted planning strategy and gradually discovered a more far-sighted one: on the first trial, she made two clicks on immediate outcomes and then selected a path; on the last trial, she inspected three final outcomes. The process-tracing data from the intermediate trials documents the participant's transition between these two very different ways of planning.
But most research questions, such as how human planning compares to the planning algorithms used in artificial intelligence, are not formulated at the level of individual planning operations but instead at the level of planning strategies. Analyzing the data collected with our process-tracing paradigm suggested that people use a wide range of different planning strategies. We found that which strategy people use does not only depend on the structure of the environment (Callaway et al., 2018; Callaway et al., 2022b) but also on the participant's learning history and individual differences. Concretely, we found that people may use as many as 79 different planning strategies across different environments and different points in time. These strategies prioritize different types of information, such as immediate outcomes versus long-term consequences, highly uncertain outcomes, or outcomes following gains rather than losses, and they also differ in when they stop collecting more information (e.g., upon uncovering a path yielding a reward of at least $48). The resulting set of strategies includes variants of classic planning algorithms, such as breadth-first search, depth-first search, and best-first search, as well as several novel strategies, such as first identifying the best possible final outcome and then planning backward from it. The 79 planning strategies can be grouped into 13 different types, including goal-setting strategies with exhaustive backward planning, forward-planning strategies similar to breadth-first search, and forward-planning strategies similar to best-first search (see Section 5 for a list of all strategies grouped by strategy type). To make it possible for researchers to measure which strategies were used, we developed a computational method that leverages each participant's process-tracing data to infer which strategy he or she used on the first trial, the second trial, the third trial, etc. We introduce this method in Section 5. The basic idea is to invert, through Bayesian inference, a probabilistic model of how the participant's process-tracing data was generated by a series of planning strategies. This is a challenging methodological problem because people rarely execute any given strategy perfectly. We solve this problem by explicitly modeling the variability in the strategy that people use, in their execution of the strategy, and in the way the execution of the strategy manifests in their process-tracing data. In addition, we also model that there might be trials on which people don't use any particular strategy, or use a strategy that is still unknown. Our computational microscope can be applied to reveal people's planning strategies in a wide range of different task environments. Used in combination, our two methods can characterize the cognitive mechanisms of human planning, investigate how a person's planning strategies evolve across trials, and uncover how planning strategies are affected by contextual factors and differ between individuals. Our methods support this research by providing trial-by-trial measurements of four aspects of human planning: the series of planning operations a person performed, which of the 79 different planning strategies was the most likely source of those planning operations, which type of strategy it was, and how different types of previously postulated mechanisms (e.g., habits vs. Pavlovian mechanisms vs. reasoning) might have shaped the person's planning on a given trial.
Figure 4 summarizes the information that our computational microscope provides the user about how a given participant planned in a given Mouselab-MDP experiment. The following sections illustrate each of these functionalities in turn. In this section we give a brief high-level overview of the functionality offered by our methods. The technical details are presented in the following section.

Measuring individual planning operations with the Mouselab-MDP paradigm

To make individual planning operations measurable, we developed a process-tracing paradigm that externalizes people's beliefs and planning operations as observable states and actions (Callaway et al., 2017). We refer to this paradigm as the Mouselab-MDP paradigm because it extends the approach of the Mouselab paradigm (Payne et al., 1993) to a general class of planning tasks known as Markov Decision Processes (MDPs) (Sutton & Barto, 2018). A Markov Decision Process comprises a series of decisions. Given the current state (e.g., location), the agent has to choose an action that, together with the current state, determines both an immediate reward and the next state. The task is to maximize the sum of all rewards over time. Inspired by the Mouselab paradigm (Payne et al., 1993), the Mouselab-MDP paradigm uses people's mouse-clicking as a window into their planning. As illustrated in Fig. 1, this paradigm presents participants with a series of route planning problems. Each route planning problem is presented as a map where each location (the gray circles) harbors a gain or loss. These potential gains and losses are initially occluded, corresponding to a highly uncertain belief state. The participant can (expensively) reveal each location's reward by clicking on it and paying a fee. This is similar to looking at a map to plan a road trip. Clicking on a circle corresponds to thinking about a potential destination, evaluating how enjoyable it would be to go there, or perhaps how costly it would be to go through there on the way to somewhere else, and then adjusting one's assessment of candidate routes accordingly. The set of revealed rewards constitutes the state of the participant's knowledge, which we will refer to as the belief state. The tasks in this paradigm are designed such that each planning operation requires the participant to make a specific click and each click is the output of a specific planning operation. Participants can make as few or as many clicks as they like. After that, the participant has to select a route through the environment using the arrow keys. For each location they visit, the corresponding reward is added to their score. The task is to maximize the money earned by traversing the environment minus the fees paid for collecting information. The Mouselab-MDP paradigm can be used to create a wide range of environments that vary in size, layout (structure), and reward distribution. Figures 1, 7a–c, and 9 illustrate the variety of task environments that can be created with this paradigm. Several of the illustrative examples below and the experiments used to validate our methods are based on the simple three-step planning task shown in Fig. 1.

Fig. 4 Illustration of the hierarchically nested information that our method provides about a participant's planning throughout the n trials of a Mouselab-MDP experiment.
The participant's learning trajectory is characterized by the sequence of planning strategies that the participant used on trial 1, trial 2, ..., trial n, respectively. The strategy the participant used on a given trial is characterized by a procedural description, the general type of planning strategy it instantiates, the sequence of clicks it performed on that trial, the plan that was selected on that trial, and how the influences of different decision systems and other factors combine to generate that strategy. Each click sequence comprises a series of clicks. Each click is characterized by where the participant clicked and which information (reward) their click unveiled. Timing data is also available.

Here, the participant can earn money by navigating a money-loving spider through a “web of cash”. There are six possible paths the participant can choose between. Each path comprises three steps, starts from the gray node in the center of the web, and proceeds along the arrows. In the first step, the spider can go left, up, or right. In the second step, it has to continue in that direction. In the third step, it can choose to either turn left or right. Each node that the spider might visit along the chosen path harbors a gain of up to $48 or a loss of up to $48. The player earns a monetary bonus proportional to the sum of the three rewards along the chosen path minus the fees they paid for clicking. In the beginning, all gains and losses are concealed. The participant can uncover them for a fee of $1 per click. The participant can make as many or as few clicks as they like. Once they are done collecting information (planning), they start acting by moving the spider with the arrow keys. The participant receives the gain or loss at a given location if and only if they move the spider there. Clicking on a node only reveals which gain or loss they would receive if they moved to the inspected location but does not collect that reward. Furthermore, whether or not a node has been inspected has no effect on the reward the participant receives when the spider enters that location. Critically, in this particular three-step planning task, the variance of the potential rewards is smallest for the nodes that can be reached within one step, larger for the nodes that can be reached within two steps, and largest for the potential final destinations that are three steps away from the spider's starting position at the center of the web (see Fig. 1). This captures a common feature of real-world planning problems, namely that long-term outcomes are more important than short-term rewards.

The Mouselab-MDP paradigm can be used to elicit information about people's planning operations at a level of detail that was inaccessible with previous behavioral paradigms. It makes it possible to measure which information people's planning strategies consider in which order, and how this depends on the information revealed by previous planning operations. Figure 3 illustrates the kind of process-tracing data that can be obtained with the Mouselab-MDP paradigm. The data from any given trial traces the strategy that an individual participant used to reach their decision on that trial. Taken together, the data from a series of trials traces how the participant's decision strategy changed along with the observations and experienced rewards that preceded each change. Concretely, the example illustrated in Fig. 3 shows what the data might look like for a participant who starts out with a myopic planning strategy and gradually discovers the optimal far-sighted goal-setting strategy.
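To make the structure of this process-tracing data concrete, the sketch below shows one way a single trial's record could be represented in code. It is a minimal sketch; the field names are illustrative assumptions, not the toolbox's actual data format.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TrialRecord:
        """One trial of Mouselab-MDP process-tracing data (hypothetical schema)."""
        trial_index: int
        clicks: List[int]              # inspected node ids, in the order clicked
        revealed_rewards: List[float]  # the reward each click uncovered
        chosen_path: List[int]         # node ids of the route selected afterwards
        click_cost: float = 1.0        # fee paid per click (e.g., $1)

        def net_score(self, path_rewards: List[float]) -> float:
            """Sum of rewards along the chosen path minus the total clicking fees."""
            return sum(path_rewards) - self.click_cost * len(self.clicks)

    # Example: two clicks on immediate outcomes, then a three-step path is chosen.
    trial = TrialRecord(trial_index=0, clicks=[1, 3], revealed_rewards=[-4.0, 2.0],
                        chosen_path=[3, 7, 12])
    print(trial.net_score(path_rewards=[2.0, 8.0, 24.0]))  # 34.0 - 2.0 = 32.0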
A computational microscope for inferring people's planning strategies

The fine-grained information about planning operations obtained from the Mouselab-MDP paradigm can be used to draw much richer inferences about how people plan and how the way they plan changes over time. However, the raw click sequences are difficult to analyze directly without sophisticated and typically theory-laden modeling tools. The computational microscope is a computational method that makes it possible to characterize how the participants of your experiment planned at the level of planning strategies, strategy types, and the contributions of different decision systems and other factors. In this section, we first give an overview of the computational microscope's functionality. We then give a detailed account of how this functionality is implemented and close with an illustrative example of how the computational microscope can be used.

Overview of the computational microscope's functionality

The computational microscope makes use of the information about people's planning operations collected with the Mouselab-MDP process-tracing paradigm to help us better understand how people plan and how their planning changes over time. It makes it possible to infer which of 79 known planning strategies a participant used on a given trial from their clicks in the Mouselab-MDP paradigm. The set of 79 planning strategies includes the strategy that does not plan at all, a strategy that only inspects the immediate rewards, a strategy that inspects only the potential final outcomes and terminates planning once it discovers a large positive value, a variant of this strategy that plans backward from the preferred final outcome, search-based planning strategies (Russell & Norvig, 2016) such as breadth-first search (i.e., first explore nodes that are one step away, then explore nodes that are two steps away, and so on) and best-first search (i.e., explore nodes in decreasing order of the values of the paths they lie on), a strategy that explores all final nodes that are farthest away from the start node, and many others. For the hypothetical data set illustrated in Fig. 3, our computational microscope would likely infer that the participant started with the myopic planning strategy that terminates upon uncovering a positive value (Strategy 53 described in Section 5) and eventually discovered the optimal goal-setting strategy (Strategy 6 described in Section 5). In addition to fine-grained information about concrete planning strategies, the computational microscope also provides high-level information about which kind of planning strategy the person is using.
Concretely, the microscope distinguishes between 13 types of planning strategies: four types of goal-setting strategies that explore potential final outcomes first; a strategy that explores immediate outcomes on the paths to the best final outcomes; a satisficing version of that strategy; forward-planning strategies (i.e., strategies that start planning from nodes that are one step away from the start node) similar to breadth-first search; middle-out planning (i.e., strategies that click the nodes in the middle of a path, then click the nodes that are nearest to the start node, and then click the nodes that are farthest away); forward-planning strategies similar to best-first search; local search strategies that focus on information about subtrees and next or previous steps along the paths that have received the most consideration so far; frugal planning strategies (i.e., strategies that explore very little or not at all); myopic planning strategies (i.e., strategies that only explore nodes that are one step away from the start node); and a few other strategies that do not fit any of these categories. The four types of goal-setting strategies differ in how many potential goals they inspect (all vs. some), in how many and which earlier outcomes they inspect (all vs. some), and in when and how often they transition between inspecting goals versus earlier outcomes. For instance, goal-setting with exhaustive backward planning inspects all potential goals and all earlier outcomes. By contrast, frugal goal-setting strategies only explore some of the potential goals and none or only a small number of the earlier outcomes. Maximizing goal-setting with limited backward planning first identifies an optimal final outcome and then either terminates planning or inspects only the nodes on the path leading to the best final outcome. By contrast, maximizing goal-setting with exhaustive backward planning inspects the paths to all potential goals in the order of the goals' rewards after having inspected all potential goals. For the hypothetical data set illustrated in Fig. 3, our computational microscope would likely infer that the participant started with a frugal planning strategy and eventually discovered a maximizing goal-setting strategy with limited backward planning. The definitions of these strategy types are presented in Section 5.

The computational microscope's functionality is realized through model-based probabilistic inference. The model comprises three components: probabilistic models of the 79 planning strategies, a probabilistic model of how planning strategies generate click sequences (observation model), and a probabilistic model of the sequence of planning strategies (prior on strategy sequences). As shown in Fig. 5, our method assumes that which planning strategy (S_t) a participant uses can change from each trial (t) to the next but remains constant within each individual trial. In other words, we assume that exactly one planning strategy is used in each trial and that this strategy may be different from the one that was used in the previous trial and the one that will be used in the following trial. Furthermore, our method assumes that the strategies themselves do not change.
Therefore, the computational microscope infers the trial-by-trial sequence of planning strategies that the participant used in the experiment (i.e., which strategy he or she used in the first trial of the experiment, which potentially different strategy he or she used in the second trial, etc.). This sequence of planning strategies is inferred from the corresponding sequence of trial-by-trial click sequences (i.e., one click sequence for each trial). The basic idea is to find the sequence of planning strategies that is most likely to have generated the observed sequence of click sequences. The trial-by-trial changes in the relative influences of different decision systems and other factors can then be read off from the inferred strategy sequence, because we make the simplifying assumption that the way in which those factors interact to generate the behavior of a given strategy does not change over time. The computational microscope requires access to a set of planning strategies that generate the planning operations in a trial, and it models transitions among these strategies using a prior. We first describe how we formally model the planning strategies. We then describe the generative model of clicks (planning operations) given a strategy, and then discuss how the computational microscope performs model inversion by combining the information about participants' clicks obtained from the Mouselab-MDP paradigm with the prior on strategy sequences to infer the most likely sequence of strategies that might have generated the data. Obtaining the most likely sequence of strategies also gives us information about the strategy types and the temporal evolution of the relative influence of decision systems (see Section 5).

Modeling planning strategies

To make it possible to extract interpretable strategies from the raw click sequences, we formulated a set of 79 planning strategies (S) through a data-driven methodology. Concretely, we manually inspected the process-tracing data from an experiment in which participants completed 31 trials of the three-step planning task illustrated in Fig. 1 (for a description, see Appendix A.1). We visually inspected this data one click sequence at a time. Each time, we checked whether the current click sequence could be an instance of an already identified strategy. When this was not the case, we manually added an additional strategy to account for the new pattern. We then proceeded to the next click sequence and repeated the same procedure. If there was no apparent pattern, we identified the click sequence as an instance of a strategy that clicks randomly. We continued this process until our strategies were able to account for all click sequences of every participant who participated in the experiment described in Appendix A.1.

Fig. 5 Overview of the computational microscope, describing the Hidden Markov model that generates the observed process-tracing data as a graphical model.

We modelled each of these planning strategies as a stochastic procedure that generates a sequence of planning operations (clicks).
That is, a planning strategy specifies a probability distribution over what the first click might be and conditional probability distributions over what each subsequent click might be, depending on which clicks were made previously and which rewards they revealed. For instance, the best-first search strategy distributes the probability of the first click evenly among the immediate outcomes and concentrates the probability of subsequent clicks on proximal outcomes that follow the best immediate reward(s). Furthermore, each planning strategy also specifies the conditional probability to terminate planning and select an action based on the information that has been revealed so far. For instance, for many of our planning strategies, the probability of terminating planning increases with the sum of the rewards of the best path that has been identified so far. As detailed in the next section, each planning strategy s entails a probability distribution P over which process-tracing data d might be observed if a participant used that strategy (P(d|s)). Different strategies differ in which planning operations they perform first, in how they use the revealed information to select the subsequent planning operations, and in when they terminate planning. We model each sequence of planning operations a participant performed from the beginning of a trial to the end of that trial as the manifestation of a single strategy.¹

According to our model, all strategies are probabilistic in the sense that they randomly select between all functionally equivalent planning operations that are consistent with what the strategy does in the current step. For instance, when the first step of a strategy is to inspect immediate outcomes until it uncovers a positive value, then our model assumes that the strategy chooses uniformly at random between all planning operations that inspect an uninspected immediate outcome. For more details about the strategies, please see Appendix A.4.

¹ Those strategies may be composed of building blocks that are reused in multiple strategies, but we do not explicitly model this hierarchical structure.

We found that, collectively, the 79 planning strategies capture people's click sequences much better than the random strategy. Concretely, we found that, on average, each click made by a participant is 3 to 6 times as likely under the best-fitting strategy as under the random strategy. That is, for the environment with increasing variance, the maximum likelihood estimate of people's strategies achieves an average click likelihood of 0.38, whereas the random strategy achieves an average click likelihood of only 0.10. For the environment with constant variance (Fig. 7b), the average per-click likelihood is 0.50, versus 0.09 for the random strategy. For the environment with decreasing variance (Fig. 7a), it is 0.37, versus 0.08 for the random strategy. And finally, for the environment used in the transfer task (Fig. 7c), it is 0.19, versus 0.03 for the random strategy.

Modeling how strategy sequences generate process-tracing data

To develop an efficient computational method for inferring the temporal evolution of people's planning strategies, we make the simplifying assumption that the trial-by-trial sequence of people's cognitive strategies (S_1, S_2, ..., S_T) forms a Markov chain whose hidden states emit the observed process-tracing data collected on each trial (d_1, ..., d_T).
This hidden Markov model requires additional methodological assumptions about (i) how cognitive strategies manifest in process-tracing data, (ii) the set of cognitive mechanisms that can be learned (defined in Section 5), and (iii) the nature and amount of cognitive plasticity that might occur. The following paragraphs detail our assumptions about components (i) and (iii) in turn.

Observation model

To plan in the Mouselab-MDP paradigm, participants have to gather information by making a sequence of clicks. Our observation model thus specifies the probability of observing a sequence of clicks d_t on trial t if the strategy was S_t (i.e., P(d_t | S_t)). To achieve this, we quantify each planning strategy's propensity to generate a click c (or stop collecting information), given the already observed rewards encoded in belief state b, by a weighted sum of 51 features (f_1(b, c), ..., f_{51}(b, c)). The features describe the click c relative to this information (e.g., by the value of the largest reward that can be collected from the inspected location) and in terms of the action it gathers information about (e.g., whether it pertains to the first, second, or third step). A detailed description of the features and strategies is available in Appendix A.6. The depth feature, for instance, describes each click by how many steps into the future it looks. The features and weights jointly determine the strategy's propensity to make click c in belief state b according to

P(d_t \mid S_t) = \prod_{i=1}^{|d_t|} \frac{\exp\left(\frac{1}{\tau} \sum_{k=1}^{|w^{(S)}|} w_k^{(S)} f_k^{(S)}(c_{t,i}, b_{t,i})\right)}{\sum_{c \in C_{b_{t,i}}} \exp\left(\frac{1}{\tau} \sum_{k=1}^{|w^{(S)}|} w_k^{(S)} f_k^{(S)}(c, b_{t,i})\right)},    (1)

where c_{t,i} is the i-th click the participant made on trial t (or the decision to stop clicking and take action), b_{t,i} is the belief state in which that click was made, C_{b_{t,i}} is the set of planning operations available in that belief state, the decision temperature τ is a hyperparameter set by the inference procedure, and w^{(S)} is the weight vector of strategy S. According to this probabilistic softmax model, all clicks are possible under each strategy in each situation, but their probability is higher the better they are aligned with the strategy.

The strategies differ in how much information they consider (ranging from collecting no information at all to exploring all of the nodes), which information they focus on, and the order in which they collect it. Building on the observation model in Eq. 1, we represent each strategy by a weight vector w = (w_1, ..., w_51) that specifies the strategy's preference for features such as more vs. less planning, exploring nodes with more vs. less uncertainty, considering immediate vs. long-term consequences, satisficing vs. maximizing, avoiding losses (cf. Huys et al., 2012), exploring paths that have a larger number of explored nodes, exploring nodes that are related to already observed nodes (such as ancestor nodes, successor nodes, and siblings), and other desiderata. These weights are computed by simulating which clicks each strategy would make and then fitting the weights in Eq. 1 using maximum likelihood estimation (MLE).
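For concreteness, the sketch below shows how the per-click softmax likelihood of Eq. 1 could be evaluated for a single trial. It is a minimal illustration rather than the toolbox's actual code: the belief state is represented only implicitly through the step index, and the feature function is a stand-in for the 51 features described above.

    import math
    from typing import Callable, List, Sequence

    def click_sequence_likelihood(
        clicks: Sequence[int],                       # observed clicks d_t on one trial
        available: Callable[[int], List[int]],       # C_b: candidate clicks before step i
        features: Callable[[int, int], List[float]], # f(b, c); the belief state is
                                                     # summarized by the step index i
        weights: Sequence[float],                    # w^(S): the strategy's feature weights
        tau: float,                                  # decision temperature
    ) -> float:
        """Eq. 1: product over clicks of softmax choice probabilities."""
        likelihood = 1.0
        for i, click in enumerate(clicks):
            scores = {c: sum(w * f for w, f in zip(weights, features(i, c)))
                      for c in available(i)}
            normalizer = sum(math.exp(s / tau) for s in scores.values())
            likelihood *= math.exp(scores[click] / tau) / normalizer
        return likelihood

    # Toy demo with two invented features; for simplicity, the same three candidate
    # clicks are available at every step.
    demo = click_sequence_likelihood(
        clicks=[2, 1],
        available=lambda i: [0, 1, 2],
        features=lambda i, c: [float(c), 1.0 if c == 2 else 0.0],
        weights=[0.5, 1.0],
        tau=1.0,
    )
    print(round(demo, 3))  # ~0.121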
These weights span a high-dimensional continuous space with many intermediate strategies and mixtures of strategies. Cognitive plasticity could, in principle, be measured by tracking how those weights change over time. But this would be a very difficult, ill-defined inference problem whose solution would depend on our somewhat arbitrary choice of features. As a first approximation, our method therefore simplifies the problem of measuring cognitive plasticity to inferring a time series of discrete strategies. A detailed description of the features used in the observation model can be found in Appendix A.6.

Prior on strategy sequences

Inferring a strategy from a single click sequence could be unreliable. To smooth out its inferences, our method therefore exploits temporal dependencies between subsequent strategies by using a probabilistic model of strategy sequences. Transitions from one strategy to the next can be grouped into three types: repetitions, gradual changes, and abrupt changes. While most neuroscientific and reinforcement-learning perspectives emphasize gradual learning (e.g., Hebb, 1949; Mercado, 2008; Lieder et al., 2018c), others suggest that animals change their strategy abruptly when they detect a change in the environment (Gershman et al., 2010). Symbolic models and stage theories of cognitive development also assume abrupt changes (e.g., Piaget, 1971; Shrager & Siegler, 1998), and it seems plausible that both types of mechanisms might coexist. We considered three kinds of priors on the strategy transitions: gradual, abrupt, and a combination of gradual and abrupt transitions. We did not find any significant relationship between the probability of a transition from one strategy to the next and the distance between the strategies (see Appendix A.2.1). We found that a transition from a strategy to itself was more likely than a transition from a strategy to some other strategy (t(975) = 7.55, p < 0.0001, BF > 1000). Model selection using either AIC (Akaike, 1974) or BIC (Schwarz, 1978) values, computed from the likelihood of the maximum likelihood estimate of the strategy sequence, also revealed the abrupt prior to be the best performing. Therefore, we use the abrupt prior for all our inferences. The gradual and the mixed priors are described in Section 5. The abrupt-changes prior assumes that transitions are either repetitions or jumps:

P(S_{t+1} = s \mid S_t, m_{\mathrm{abrupt}}) = p_{\mathrm{stay}}\, I(s = S_t) + \frac{(1 - p_{\mathrm{stay}})\, I(s \neq S_t)}{|S| - 1},    (2)

where S is the set of strategies, |S| is the number of strategies, and p_stay is the probability of strategy repetitions. We model the probability of the first strategy as a uniform distribution over the space of decision strategies (i.e., P(S_1) = 1/|S|). Together with the observation model and the strategy space described above, the prior defines a generative model of a participant's process-tracing data d; this model has the following form:

P(d, S_1, \ldots, S_T) = \frac{1}{|S|}\, P(d_1 \mid S_1) \prod_{t=2}^{T} P(S_t \mid S_{t-1}, m_{\mathrm{abrupt}})\, P(d_t \mid S_t).    (3)

Inverting this model gives rise to a computational method for measuring an important aspect of cognitive plasticity.

Inferring strategy sequences by model inversion

Our model describes how the sequence of planning strategies a participant uses across the different trials of the experiment manifests in their process-tracing data. To measure this sequence of planning strategies, we have to reason backwards from the process-tracing data d to the unobservable cognitive strategies S_1, ..., S_T that generated it. To achieve this, we first model the generation of process-tracing data using a Hidden Markov Model with the 79 planning strategies as the possible values of its latent states and the prior m_abrupt as its transition prior. Having modelled how likely alternative strategies are to generate a given sequence of clicks, we can apply Bayes' theorem to compute how likely a person is to have used different planning strategies given the clicks that they have made. More concretely, the computational microscope computes the sequence of strategies s_1, s_2, ..., s_T that is most likely to have given rise to the process-tracing data observed on the corresponding T trials (d_1, d_2, ..., d_T). This is achieved by applying the Viterbi algorithm (Forney, 1973) to compute the maximum a posteriori (MAP) estimate

\arg\max_{s_1, \ldots, s_T} P(s_1, s_2, \ldots, s_T \mid d_1, d_2, \ldots, d_T)

of the hidden sequence of planning strategies S_1, ..., S_T given the observed process-tracing data d, the measurement model m_abrupt, and the parameters (p_stay of Eq. 2 and the temperature parameter τ of the observation model). This inference combines the likelihood that a possible strategy would generate an observed click sequence with how probable potential sequences of planning strategies are a priori. The prior probability of strategy sequences is assigned based on the knowledge that people are often somewhat more likely to repeat the strategy they used on the previous trial than to switch to an arbitrary other strategy. To estimate the model parameter p_stay, we perform a grid search with a resolution of 0.02 over p_stay ∈ [0, 1]. The value of τ is set using 50 iterations of Bayesian optimization, with the likelihood of the MAP estimate of the click sequence as the objective it maximizes. We use the Tree-structured Parzen Estimator approach to Bayesian optimization implemented in the hyperopt Python package (Bergstra et al., 2013) for optimizing the parameter τ. Inferring the hidden sequence of cognitive strategies in this way lets us see otherwise unobservable aspects of cognitive plasticity through the lens of a computational microscope.
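For illustration, the sketch below implements this inference with the standard Viterbi recursion: it takes a table of per-trial log-likelihoods log P(d_t | S_t = s) (as produced, e.g., by the observation-model sketch above), encodes the abrupt-changes prior of Eq. 2, and returns the MAP strategy sequence. This is a minimal sketch of the computation, not the toolbox's actual implementation, and it assumes 0 < p_stay < 1.

    import math
    from typing import List, Sequence

    def viterbi_strategy_sequence(
        loglik: Sequence[Sequence[float]],  # loglik[t][s] = log P(d_t | S_t = s)
        p_stay: float,                      # repetition probability of Eq. 2
        n_strategies: int = 79,
    ) -> List[int]:
        """MAP sequence of strategies under the HMM: uniform initial distribution,
        abrupt-changes transition prior (Eq. 2), softmax emissions (Eq. 1)."""
        log_stay = math.log(p_stay)
        log_switch = math.log((1.0 - p_stay) / (n_strategies - 1))
        # delta[s]: best log-probability of any sequence ending in strategy s at trial t
        delta = [-math.log(n_strategies) + loglik[0][s] for s in range(n_strategies)]
        backptr = []
        for t in range(1, len(loglik)):
            ptr, new_delta = [], []
            for s in range(n_strategies):
                best_prev = max(
                    range(n_strategies),
                    key=lambda p: delta[p] + (log_stay if p == s else log_switch),
                )
                ptr.append(best_prev)
                new_delta.append(delta[best_prev]
                                 + (log_stay if best_prev == s else log_switch)
                                 + loglik[t][s])
            delta = new_delta
            backptr.append(ptr)
        # Backtrack from the most probable final strategy.
        path = [max(range(n_strategies), key=lambda s: delta[s])]
        for ptr in reversed(backptr):
            path.append(ptr[path[-1]])
        path.reverse()
        return path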
Inference on strategy types and meta-control

To understand what types of strategies people use, we grouped our 79 strategies using hierarchical clustering on the distances between the strategies. Since the strategies are probabilistic, we defined the distance metric Δ(s_1, s_2) between strategies s_1 and s_2 as the symmetrized Kullback-Leibler divergence between the distributions of click sequences and belief states induced by strategies s_1 and s_2, respectively, that is,

\Delta(s_1, s_2) = \mathrm{JD}[p(d \mid s_1), p(d \mid s_2)] = \mathrm{KL}[p(d \mid s_1), p(d \mid s_2)] + \mathrm{KL}[p(d \mid s_2), p(d \mid s_1)],    (4)

and approximated it using Monte Carlo integration. Applying Ward's hierarchical clustering method (Ward, 1963) to the resulting distances suggested 13 types of planning strategies, described in Section 5. As discussed in Section 5, we assume that people's choice of planning operations is shaped by the interactions of multiple decision systems and other factors. To measure the contribution of each factor to a strategy, we first assigned each feature to one of the decision systems. Then, for each decision system, we added the weights of the features belonging to that decision system if the feature represented an increase in that decision system and subtracted them if it represented a decrease, giving us a weight w_ds for each decision system. The relative influence of a decision system on a strategy is measured by

\mathrm{RI}_{ds} = \frac{|w_{ds}|}{\sum_{ds' \in D} |w_{ds'}|},    (5)

where D is the set of all decision systems.
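As an illustration, Eq. 5 could be computed as follows. This is a minimal sketch: the assignment of features to decision systems and the signs are inputs here because the actual grouping is the one defined in Appendix A.6, and the numbers in the example are invented.

    from typing import Dict, List, Sequence, Tuple

    def relative_influences(
        strategy_weights: Sequence[float],
        categories: Dict[str, List[Tuple[int, int]]],  # name -> [(feature_index, sign)]
    ) -> Dict[str, float]:
        """Eq. 5: signed within-category weight totals, normalized by absolute value."""
        totals = {name: sum(sign * strategy_weights[i] for i, sign in feats)
                  for name, feats in categories.items()}
        norm = sum(abs(w) for w in totals.values()) or 1.0  # guard against all-zero weights
        return {name: abs(w) / norm for name, w in totals.items()}

    # Invented three-feature example with two decision systems:
    print(relative_influences(
        strategy_weights=[0.8, -0.2, 0.5],
        categories={"Pavlovian": [(0, +1)],
                    "satisficing and stopping criteria": [(1, +1), (2, +1)]},
    ))  # {'Pavlovian': 0.727..., 'satisficing and stopping criteria': 0.272...}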
An example of applying the computational microscope

To illustrate the functionality of our computational microscope, we applied it to data from an experiment evaluating intelligent tutors that teach people effective planning strategies (i.e., the experiment described in Appendix A.1). In this experiment, participants practiced planning in the three-step decision task illustrated in Fig. 1 (see Section 5) for 10 trials (training block) and were then tested on 20 more trials of the same task (test block). Participants in the experimental conditions received two different types of feedback during the training block. Participants in the control condition received no feedback.

Table 1 Summary of the planning strategies that people used most frequently in the environment illustrated in Fig. 1

| Strategy type | Strategy | Used on % of trials | Used by % of people | People who used this strategy (type) used it on % of trials |
| Maximizing goal-setting with limited backward planning | — | 50.4% | 68.8% | 69.7% |
| — | Random search for best possible final outcome | 36.6% | 55.1% | 63.1% |
| — | Consecutive second maximum | 6.3% | 17.6% | 33.8% |
| — | Extra planning after observing second best outcome | 2.2% | 10.2% | 20.6% |
| Frugal planning | — | 14.8% | 34.1% | 41.2% |
| — | No planning | 13.2% | 26.1% | 47.8% |
| Miscellaneous strategies | — | 11.3% | 43.8% | 24.6% |
| — | Explore immediate outcomes of second best outcomes | 2.2% | 11.4% | 18.0% |
| Local search | — | 7.4% | 27.3% | 25.8% |
| — | Satisficing depth-first search | 3.6% | 14.2% | 24.3% |
| — | Priority to explored immediate ancestors | 2.1% | 8.0% | 25.0% |
| Myopic planning | — | 6.5% | 29.0% | 21.2% |
| — | Explore all immediate outcomes with satisficing | 2.1% | 9.7% | 20.4% |
| — | Explore all immediate outcomes | 2.1% | 9.1% | 22.3% |
| Frugal goal-setting | — | 6.1% | 29.0% | 20.0% |
| — | Goal-setting with positive satisficing | 2.1% | 8.0% | 24.5% |

Table 1 lists all strategies that people used on at least 2% of the trials, ordered by strategy type and frequency. As can be seen, the most common strategy types were maximizing goal-setting with limited backward planning, frugal planning, local search, myopic planning, frugal goal-setting, and other miscellaneous strategies that don't belong to any other strategy type. These six types of strategies jointly accounted for 96.5% of all strategies that people used in this environment. For more information about these strategy types and the corresponding planning strategies, please see Appendix A.4.

Measuring the relative contributions of different decision systems and other factors

How people plan is shaped by the interaction of multiple different types of mechanisms throughout the decision-making process (van der Meer et al., 2012; Huys et al., 2012, 2015; Dolan & Dayan, 2013; Cushman & Morris, 2015; Keramati et al., 2016; Daw, 2018). In most real-life decisions it is infeasible or unwise to consider all possible sequences of actions, states, and outcomes. To decide which alternatives to consider and which ones to ignore, the model-based system relies on the recommendations of simpler mechanisms such as Pavlovian impulses (Huys et al., 2012), value estimates learned through model-free reinforcement learning (Cushman & Morris, 2015), and simple heuristics (Huys et al., 2015). Furthermore, previous findings indicate the existence of an additional decision system that is specialized for deciding between continuing to gather information (e.g., by foraging) versus acting on the information that is already available (Rushworth et al., 2012). Since deciding how to plan is like foraging for information, the decision when to stop planning might also be made separately from the decision how to plan.
This decision can be made by determining whether the best plan identified so far is already good enough (satisficing) or by other stopping criteria. In addition, people are also known to engage in metareasoning (Ackerman & Thompson, 2017; Griffiths et al., 2019) – that is, reasoning about reasoning – to figure out the best way to figure out what to do. Furthermore, all else being equal, the way in which people decide seems to follow the law of least mental effort (Patzelt et al., 2019; Balle, 2002; Kool et al., 2010); that is, people seek to avoid mental effort. We assume that all of these factors simultaneously influence how a person selects his or her individual planning operations while making a single decision (Keramati et al., 2016; Huys et al., 2012, 2015; Daw, 2018). To measure the relative contributions of these different types of factors to each of the 79 planning strategies, we divided the features whose weights determine the strategies' preferences for alternative planning operations into five categories: Pavlovian, model-free values and heuristics, model-based metareasoning, mental-effort avoidance, and satisficing and stopping criteria. The Pavlovian features report how attractive or repelling it is to think about a state based on the rewards and losses that precede or follow it. The category model-free values and heuristics includes structural and relational features of state-action pairs that people might come to associate with rewarded versus unrewarded planning operations. The features in the category model-based metareasoning are derived from a model of how alternative planning operations reduce the decision-maker's uncertainty about which plan is best. The category mental-effort avoidance includes a single feature that distinguishes between performing a planning operation (more mental effort) versus acting without further planning (less mental effort). The features in the category satisficing and stopping criteria describe conditions under which specific stopping rules would terminate planning, such as whether there is a path whose expected return exceeds $48, which is an instance of satisficing (Simon, 1955). For a detailed definition of these categories in terms of their constituent features, see Appendix A.6. To measure the relative influence of these five types of factors on how a person planned on a given trial, we first sum up the weights that the inferred strategy assigns to features of each type to get a total weight for the type, and then normalize its absolute value by the sum of the absolute values of the total weights of all types. Performing this calculation separately for the first, second, third, ..., last trial allows us to track how the relative influence of different decision systems (i.e., the model-based system, the Pavlovian system, and model-free systems) and other factors (i.e., mental-effort avoidance and stopping criteria) changes as people learn how to plan. For the hypothetical data set illustrated in Fig. 3, our computational microscope would likely infer that the participant started out relying primarily on structural features (a sub-category of model-free values and heuristics), satisficing features, and mental-effort avoidance. Furthermore, it would most likely infer that the participant then transitioned to relying increasingly more on model-based metareasoning features.
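Combining the inferred strategy sequence with per-strategy weights makes this trial-by-trial time course straightforward to compute. Below is a minimal sketch that reuses the relative_influences function from the sketch above; the weights, category assignment, and strategy ids are all invented for illustration.

    # Track how the relative influence of each factor type evolves across trials
    # by mapping each inferred strategy to its weight vector and applying Eq. 5
    # trial by trial. All numbers are hypothetical.
    strategy_weights = {53: [0.7, -0.2, 0.4],   # hypothetical myopic strategy
                        6:  [0.1,  0.9, 0.3]}   # hypothetical goal-setting strategy
    categories = {"model-free values and heuristics": [(0, +1)],
                  "model-based metareasoning": [(1, +1)],
                  "satisficing and stopping criteria": [(2, +1)]}

    inferred_sequence = [53, 53, 53, 6, 6]  # e.g., output of the Viterbi sketch above
    for t, s in enumerate(inferred_sequence):
        ri = relative_influences(strategy_weights[s], categories)
        print(t, {name: round(value, 2) for name, value in ri.items()})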
Measuring cognitive plasticity

Our method makes it possible to measure how people's approach to planning changes at multiple levels of resolution across time scales ranging from seconds to decades. It can resolve changes in people's planning at the level of individual planning operations, planning strategies, strategy types, and the contributions of different decision systems and other factors. By default, our method's temporal resolution is the amount of time that passes from one trial to the next. This makes it suitable for reverse-engineering the learning mechanisms through which people discover and continuously refine their planning strategies (Jain et al., 2019). It can also measure how people's approach to planning evolves over longer time scales, such as blocks, sessions, years, and decades. This makes the computational microscope suitable for investigating how people learn how to plan and how they adapt their planning strategies to new environments. Figure 6 illustrates the computational microscope's ability to reveal how people's propensities towards different types of planning strategies evolve as they learn how to plan in the task illustrated in Fig. 1; to obtain these results, we applied the computational microscope to the data from the control condition of the experiment described in Appendix A.1. The output of the computational microscope revealed that the strategies which explore the final outcomes first and terminate upon finding a high value became the most frequent strategy type. During this transition, people shifted away from frugal planning strategies (i.e., strategies that explore only a few outcomes), which were the most common strategies at the start of the experiment, along with the myopic planning strategies (strategies that explore immediate outcomes first). The miscellaneous strategies also decreased in frequency. The frequency of local search strategies (i.e., strategies that focus on information about subtrees or paths that have been explored the most so far) and frugal goal-setting strategies (i.e., strategies that start exploring from the final outcomes and only explore a few outcomes) initially increased and then decreased again.

In addition, the computational microscope can also be used to measure the transfer of learning from one task to another. Traditionally, transfer effects are established by demonstrating the training's effect on people's average performance in an untrained task. The computational microscope makes it possible to determine whether people transfer the specific strategies they learned in the training task to untrained tasks. To illustrate this, we applied the computational microscope to data from a transfer experiment in which participants practiced planning in a simple, small environment and were then tested on a larger and more complex environment. Concretely, the participants in the second experiment from Lieder (2018b) performed the five-step planning task illustrated in Fig. 7c after having practiced planning in the three-step planning task illustrated in Fig. 1 with optimal feedback (experimental condition) or without feedback (control condition). As shown in Fig. 8, the computational microscope revealed that participants from both conditions transferred the near-optimal goal-setting strategy they had learned in the three-step planning task to the five-step planning task.

Fig. 6 Measured time course of frequencies of strategy types in the experiment described in Appendix A.1.
Furthermore, our approach can also be used to characterize how people's approach to planning changes across the lifespan (Das et al., 2019). Finally, our method can also be used to detect and compare the effects of (pedagogical) interventions on how people learn how to plan and to elucidate inter-individual differences in metacognitive learning (e.g., in psychiatric disorders).

A step-by-step guide to measuring how people learn how to plan

Experimenters can make use of our paradigm and our computational microscope very easily. In this section, we provide a tutorial-like introduction to running experiments with the Mouselab-MDP paradigm and to applying the computational microscope to the resulting data.

A step-by-step guide to creating and running process-tracing experiments with the Mouselab-MDP paradigm

Having motivated the paradigm, we briefly describe both the interface through which experimenters specify experiments and the interface through which participants engage in the task. Two screenshots of the paradigm are shown in Fig. 9, and a live demo can be viewed at http://cocosci.princeton.edu/webexpt/mouselab-demo/. The code for Mouselab-MDP and an example of how to use it are available at https://github.com/RationalityEnhancement/Mouselab-MDP. On each trial, an environment is conveyed by an intuitive visualization (see Fig. 9). Formally, each environment corresponds to a directed graph with states as nodes and actions as edges. The participant navigates through the graph using the keyboard, attempting to collect the maximal total reward. States or edges are annotated with the reward for reaching the state or taking the action. Crucially, these labels may not be visible when the trial begins. Rather, the participant may need to click or hover their mouse over a state or edge to see the associated reward. The time course of these information-gathering operations provides fine-grained information about the person's planning strategy. Furthermore, our paradigm allows researchers to investigate how people negotiate the tradeoff between the cost of thinking and its benefits. This can be done by manipulating the cost of information gathering, for instance by charging participants a certain number of points per click. With the Mouselab-MDP jsPsych plugin, experimenters can create a planning experiment by specifying the following critical components:

1. graph is a mapping s ↦ A from a state s to action contingencies A. Each action contingency is a mapping a ↦ (r, s′) from an action to a reward r and the next state s′. The graph structure thereby specifies the actions a available in each state, as well as the reward r and resultant state s′ associated with each action.
2. initial is the state in which the participant begins the trial.
3. layout is a mapping s ↦ (x, y) that specifies the location of each node on the screen.

Fig. 7 Illustration of the environment with decreasing variance (a), the environment with constant variance (b), and the five-step version of the environment with increasing variance (c). In the environment with decreasing variance, the rewards at the first, second, and third step are sampled uniformly at random from the sets {−48, −24, +24, +48}, {−8, −4, +4, +8}, and {−4, −2, +2, +4}, respectively. In the environment with constant variance, the rewards at all locations are independently sampled from the same uniform distribution over the set {−10, −5, +5, +10}.
In the five-step planning task with increasing variance, the rewards at steps 1 to 4 are drawn from normal distributions with mean 0 and standard deviations $\sigma_1 = 2^0$, $\sigma_2 = 2^1$, $\sigma_3 = 2^2$, and $\sigma_4 = 2^3$, respectively, and the reward at step 5 is drawn from a normal distribution with mean 0 and standard deviation $\sigma_5 = 2^5$.

Specifying only these settings will result in a graph with rewards shown on the edges between nodes and no labels on the states. To take advantage of additional Mouselab features, the user must specify at least one of the following optional properties:

1. stateLabels is a mapping $s \mapsto \ell$ that specifies the label to be shown on each state.
2. stateDisplay ∈ {'never', 'hover', 'click', 'always'} specifies when state labels are displayed. When set to 'click', clicking on a state causes its label to appear and remain visible until the end of the trial. The optional parameter stateClickCost specifies the cost (a negative number) for clicking on a single state. When set to 'hover', the label appears only while the mouse is hovering over the associated state. There is no cost for this option because the participant's mouse might pass over a state by accident.
3. edgeLabels is analogous to stateLabels, except that it defaults to the rewards associated with each edge.
4. edgeDisplay is analogous to stateDisplay. edgeClickCost specifies the cost.

Using this concise yet flexible plugin, various state-transition and reward structures can be displayed automatically. This allows experimenters to quickly create a large number of highly variable stimuli. Our plugin thereby enables experimenters with only basic knowledge of JavaScript to create a wide range of qualitatively novel experiments that can be run online with crowd-sourcing services such as Amazon Mechanical Turk.

Fig. 8 Comparison of the frequencies of strategy types between the environment with increasing variance and the transfer task. For a detailed description of the strategy types, see Appendix A.4
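To make the component list above concrete, here is a minimal sketch of a trial specification for a hypothetical two-step task. Because the plugin consumes a JSON-style structure, we write it as a Python dictionary for illustration; the field names (graph, initial, layout, stateLabels, stateDisplay, stateClickCost) are taken from the lists above, while the concrete states, rewards, and coordinates are invented and should not be read as the plugin's literal API.

import json

# A hypothetical trial: the participant starts in "s0", each action maps to
# [reward, next_state], and state labels are hidden behind clicks that cost
# one point each.
trial = {
    "graph": {
        "s0": {"left": [-5, "s1"], "right": [5, "s2"]},
        "s1": {"left": [10, "s3"], "right": [-10, "s4"]},
        "s2": {"left": [-10, "s5"], "right": [10, "s6"]},
        "s3": {}, "s4": {}, "s5": {}, "s6": {},   # terminal states
    },
    "initial": "s0",
    "layout": {                                    # screen positions (x, y)
        "s0": [0, 0], "s1": [-1, 1], "s2": [1, 1],
        "s3": [-1.5, 2], "s4": [-0.5, 2], "s5": [0.5, 2], "s6": [1.5, 2],
    },
    "stateLabels": {"s3": 10, "s4": -10, "s5": -10, "s6": 10},
    "stateDisplay": "click",                       # labels revealed by clicking
    "stateClickCost": -1,                          # cost per click (negative)
}

print(json.dumps(trial, indent=2))                 # JSON ready for a timeline

An equivalent JavaScript object would be passed to the plugin inside a jsPsych timeline.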
Step-by-step guide on using the computational microscope

Given a data set collected with the Mouselab-MDP paradigm with uniform click costs and no edge rewards, our computational microscope can be used to obtain a detailed analysis of how the participants learned how to plan, without any additional programming. Here, we provide a step-by-step guide to applying the computational microscope. To help users get started with the computational microscope without having to collect data first, it comes with data from four experiments using the tasks illustrated in Figs. 1 and 7a–c, respectively. The computational microscope provides information about the strategy sequence, the amount of noise in the application of the strategy, the sequence of strategy types, and the change in the relative frequency of decision systems.

Fig. 9 Two example paradigms created with the Mouselab-MDP plugin for jsPsych: a) Each state is labeled with the reward for reaching that state; these rewards become visible after they are clicked, with a $0.10 fee per click. b) The reward for making a transition is revealed only while the mouse is hovering over the corresponding arrow

The computational microscope requires git and Python 3 to be installed on the user's machine. The following steps describe how to apply the computational microscope to a data set and the output it provides.

1. Access the data sets and the source code of the computational microscope by cloning the corresponding GitHub repository using the command:

git clone https://github.com/RationalityEnhancement/ComputationalMicroscope.git

The repository includes four data sets, contained in the folder data/human/. For a detailed description of these data sets, see Table 2.

2. Navigate to src/ and install the package requirements by running the following command in the cloned repository's root directory:

pip3 install -r requirements.txt

3. Apply the computational microscope to any of the four data sets described in Table 2 using the following command:

python3 infer_sequences.py <dataset> <block> <condition>

The values these parameters can take are listed by the command:

python3 infer_sequences.py help

Here, the parameters <dataset>, <block>, and <condition> specify the name of the data set, the block of the experiment that generated the data set, and the condition of the experiment the computational microscope is to be run on. Upon successful completion, a dictionary with participant IDs as keys and strategy sequences as values is stored as a pickle file at "results/inferred_sequences/<dataset>_<block>_<condition>_strategies.pkl", and the corresponding noise parameter values are stored, in the same format, in "results/inferred_sequences/<dataset>_<block>_<condition>_temperatures.pkl" (a sketch of loading these output files follows this list). For example, to run the computational microscope on the training block of the data set with increasing variance for participants in the condition without feedback, run the following command:

python3 infer_sequences.py increasing_variance train none

4. Analyze the generated sequences by running the command:

python3 analyze_sequences.py <dataset> <block> <condition>

This command produces plots of the trial-by-trial changes in the frequencies of the top-5 strategies and strategy types, and in the influence of different decision systems and other factors. It integrates the data from all participants into the plots in the "results/<dataset>_plots" directory. For example, the following command generates the plots shown in Fig. 10:

python3 analyze_sequences.py increasing_variance test none
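As a minimal sketch of what working with these output files might look like, the following snippet loads the inferred strategy sequences and prints the most common strategy on each trial. The path follows the pattern printed in step 3; the exact file-naming convention should be checked against the repository, and the dictionary format (participant ID mapped to a per-trial strategy sequence) is taken from the text.

import pickle

# Hypothetical paths following the pattern described in step 3 above.
base = "results/inferred_sequences/increasing_variance_train_none"

with open(base + "_strategies.pkl", "rb") as f:
    strategies = pickle.load(f)    # {participant_id: [strategy, one per trial]}
with open(base + "_temperatures.pkl", "rb") as f:
    temperatures = pickle.load(f)  # {participant_id: noise parameter values}

# Relative frequency of the most common strategy on each trial.
n_trials = len(next(iter(strategies.values())))
for t in range(n_trials):
    used = [seq[t] for seq in strategies.values()]
    top = max(set(used), key=used.count)
    print(f"trial {t}: strategy {top} used by {used.count(top) / len(used):.0%}")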
The computational microscope, in its current implementation, can be applied to task structures that are symmetric and do not have cycles, but the general approach described in this article works for arbitrary environments.

Table 2 Datasets included in the computational microscope repository (the value in brackets references the experiment number in the code)

- increasing variance (v1.0): Blocks: training, test. Conditions: meta, action, none. Description: process-tracing data from the experiment with the environment shown in Fig. 1; the experiment consisted of 10 training trials and 20 test trials and had three conditions that determined the kind of feedback provided to the participants. Reference: Appendix A.1.
- decreasing variance (c2.1): Blocks: training, test. Condition: none. Description: process-tracing data from the experiment with the environment shown in Fig. 7a; the experiment consisted of 30 training trials and 30 test trials and had only a single condition. Reference: Callaway et al. (2018).
- constant variance (c1.1): Blocks: training, test. Condition: none. Description: process-tracing data from the experiment with the environment shown in Fig. 7b; the experiment consisted of 30 training trials and 30 test trials and had only a single condition. Reference: Callaway et al. (2018).
- transfer task (T1.1): Blocks: pre-training, training, test. Condition: none. Description: process-tracing data from the experiment with the environment shown in Fig. 7c; the experiment consisted of 1 pre-training trial, 10 training trials, and 20 test trials and had only a single condition. Reference: Lieder (2018a).

Fig. 10 Generated analysis plots for the training block of the no-feedback condition of the increasing variance data set. a) Influence of different decision systems and other factors. b) Trial-wise changes in strategy type frequencies. c) Trial-wise changes in strategy frequencies

The implementation and a detailed tutorial on applying the computational microscope to a custom data set are available at https://github.com/RationalityEnhancement/ComputationalMicroscope.

Does it work?

To test whether using the computational microscope in conjunction with the Mouselab-MDP paradigm is a reliable way to measure how people plan, we evaluated this approach using simulations and empirical data. First, we performed simulations to test our hypothesis that the Mouselab-MDP paradigm yields so much information about how people plan that it becomes possible to accurately infer which planning strategy a person used on a single trial and how that strategy differed from the strategies the participant used on the preceding and the following trial. In follow-up simulations, we then assessed whether this is also true for the relative contributions of different decision systems. Following these simulation studies, we tested whether the inferences of our method are valid measures of planning and learning by applying it to empirical data from studies in which planning and learning were experimentally manipulated.

Simulation studies

To test whether our experimental paradigm makes it possible to infer people's planning strategies on a trial-by-trial basis, we simulated the process-tracing data we would obtain in a Mouselab-MDP experiment depending on which strategies people use and how those strategies change from one trial to the next. We then applied our computational microscope to the simulated process-tracing data to test whether the data are sufficiently informative about the underlying planning strategies for us to infer them correctly. Concretely, we report two sets of simulations suggesting that our method can accurately measure changes in people's planning strategies and the relative influence of different decision systems, respectively.
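Both sets of simulations rest on the same generative assumption used throughout the paper: a strategy weighs the features of candidate planning operations, and each click is sampled from a softmax over the resulting preferences (the model referred to in this article as Eq. 1). The following sketch illustrates this click-generation process; the features, weights, and the convention of operation 0 as "stop planning" are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def sample_click_sequence(features, w, tau=1.0, max_clicks=5):
    """Sample clicks from a softmax over feature-based utilities.

    features: {operation: feature vector}; operation 0 stands for
    "terminate planning" in this sketch. w: the strategy's weights.
    """
    clicks, available = [], set(features)
    for _ in range(max_clicks):
        ops = sorted(available)
        u = np.array([features[o] @ w for o in ops]) / tau
        p = np.exp(u - u.max())
        p /= p.sum()
        op = int(rng.choice(ops, p=p))
        if op == 0:                  # the strategy decided to stop planning
            break
        clicks.append(op)
        available.remove(op)
    return clicks

features = {o: rng.normal(size=3) for o in range(7)}   # 6 nodes + stop
print(sample_click_sequence(features, w=np.array([1.0, -0.5, 0.2])))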
Is the process-tracing data from the Mouselab-MDP paradigm sufficiently informative about people's planning strategies? We simulated a Mouselab-MDP experiment with 31 trials of the 3-step planning task illustrated in Fig. 1 and described in Section 5 for various sequences of planning strategies. We derived the sequences of planning strategies from five different models of how people might learn how to plan. To generate the first data set, we applied the rational model of strategy selection learning by Lieder and Griffiths (2017); the parameters of this model were fit to the data from 57 participants performing 31 trials of the 3-step planning task illustrated in Fig. 1 (i.e., the control condition of the experiment described in Appendix A.1). We created four additional data sets by modeling the temporal evolution of people's planning strategies as gradual learning, insight-like learning, a mixture of gradual and insight-like learning, or a random process that chooses the strategy on each trial independently at random (random model). In all cases, the generation of the strategy sequence and the generation of each click sequence given the sampled strategy involved a considerable amount of randomness that matched or exceeded the variability observed in human data. For a more detailed description of how the data were generated, please see Section 5 in the Appendix. To avoid bias towards any one of the five models, we used each of them to generate a data set with 500 simulated participants completing 31 trials each. We then combined the resulting five data sets into a single data set of 2500 simulated participants.

We then used our computational microscope to compute the maximum a posteriori estimate of each participant's sequence of strategies and compared it to the ground-truth sequence of strategies. We evaluated the informativeness of our process-tracing paradigm in terms of how accurately the strategies and strategy types could be inferred from the simulated process-tracing data. We found that the process-tracing data made it possible to infer the true strategy on 80 ± 0.01% of the trials and the true strategy type on 92 ± 0.00% of them. These findings suggest that our experimental paradigm yields so much information that we can hope to infer people's planning strategies on a trial-by-trial basis. Furthermore, these results suggest that we have implemented our computational method correctly and that the 79 candidate strategies are different enough that it is possible to discriminate between them. For a detailed description of model-wise strategy and strategy type accuracies, please see Appendix A.3.
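To make the accuracy computation concrete, here is a minimal sketch of how one might score an inferred sequence against the ground truth at both levels of granularity. The strategy numbers and the strategy-type mapping are hypothetical placeholders, not the paper's actual cluster assignments.

def accuracy(truth, inferred):
    """Fraction of trials on which the inferred label matches the truth."""
    return sum(t == i for t, i in zip(truth, inferred)) / len(truth)

# Hypothetical mapping from individual strategies to strategy types.
strategy_type = {1: "GS", 2: "GS", 6: "MGS"}

truth    = [1, 1, 6, 6, 6]
inferred = [2, 1, 6, 6, 6]   # one within-type confusion on the first trial

print(accuracy(truth, inferred))                                  # 0.8
print(accuracy([strategy_type[s] for s in truth],
               [strategy_type[s] for s in inferred]))             # 1.0

Errors that confuse strategies within the same cluster leave the type-level score intact, which is why the type-level accuracy (92%) exceeds the strategy-level accuracy (80%).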
Validation of measuring the contributions of different decision systems and other factors

We validated our method's ability to recover the trend in the relative influence of different decision systems and other factors across a series of 79 trials. Each simulation assumed one of three possible trends: increasing influence, decreasing influence, or constant influence. For the increasing and decreasing trends, we created, for each factor, a sequence of 79 strategies in which each strategy appears exactly once and the strategies are ordered by the contribution of the factor to the corresponding strategy. We then generated a data set of 500 sequences of click sequences. For the constant case, for each factor, we partitioned the set of strategies into up to three groups based on the 33rd, 67th, and 100th percentiles of the relative influence of the factor across all strategies. We validated our microscope on 500 simulated sequences. To generate a sequence, we randomly selected one of the three groups, sampled 79 strategies from that group, and arranged them in a sequence. Figure 11 shows that our computational microscope recovered the trends in the relative influence of the decision systems and other factors very accurately.

Validation on empirical data

We also validated our computational microscope on empirical data; that is, we tested whether it can detect the effects of experimental manipulations and task structure on people's planning strategies and metacognitive learning.

Detecting the effect of feedback on cognitive plasticity

To verify whether our computational microscope can detect the effect of an experimental manipulation expected to promote cognitive plasticity, namely feedback, we applied it to the Mouselab-MDP process-tracing data from the experiment described in Appendix A.1, in which 164 participants solved 30 different 3-step planning problems of the form shown in Fig. 1. Participants in the control condition received no feedback, whereas participants in the first experimental condition received feedback on their actions (Action FB) and participants in the second experimental condition received feedback on how they made their decisions (Metacognitive FB). Action FB stated whether the chosen move was sub-optimal and included a delay penalty whose duration was proportional to the difference between the expected returns of the optimal move and the chosen one. In contrast to Action FB, Metacognitive FB pertains to how decisions are made rather than to the decisions themselves. Metacognitive FB is given after every information-gathering operation (click). It has two components that convey, respectively, the informational value of the planning operation and the planning operation that the optimal strategy would have chosen.

Fig. 11 Smoothed plots comparing the actual ("Ground Truth") and inferred ("Measured Values") trends in the relative influence of different decision systems and other factors. The computational microscope was applied to click sequences generated from strategy sequences in which the weight of one of the five factors was systematically increasing (top row), decreasing (center row), or constant (bottom row). Each line is based on a different strategy sequence

This metacognitive feedback was designed to be more effective than action feedback at teaching people the optimal planning strategy for the task illustrated in Fig. 1. This strategy (Callaway et al., 2018) starts by searching the potential final destinations for the best possible outcome and terminates planning upon finding one of them. As Fig. 12 shows, the computational microscope correctly detected that feedback boosted metacognitive learning. Concretely, the computational microscope revealed that metacognitive feedback boosted the discovery of the optimal planning strategy (58% vs. 31% in the no-feedback condition, z = 15.44, p < 0.0001, BF > 1000; see Footnote 2) and decreased people's propensity to start planning by considering immediate outcomes, i.e., forward planning (2% vs. 14% in the no-feedback condition, z = −13.27, p < 0.0001, BF > 1000), whereas action feedback reduced the frequency of the near-optimal planning strategy (24% vs. 31% in the no-feedback condition, z = −4.74, p < 0.0001, BF > 1000) and did not change the frequency of the forward-planning strategies (15% vs. 16% in the no-feedback condition, z = 1.00, p = 0.3193, BF = 0.10).

Fig. 12 Comparison of the frequencies of forward-planning and near-optimal strategies across the different types of feedback in the experiment described in Appendix A.1. The green, orange, and blue lines represent the metacognitive feedback, action feedback, and no-feedback conditions, respectively. The circles represent the forward-planning strategies and the stars represent the near-optimal planning strategies

The computational microscope allows us to gain additional insights into how those changes in people's strategies come about.
Concretely, correcting for multiple comparisons ($\alpha_{\text{Sidak}} = 0.0034$) and applying the Wilcoxon signed-rank test, Fig. 13 shows that metacognitive feedback significantly accelerated people's transition to choosing their planning operations increasingly more based on the model-based metareasoning system (T = 248, p = 0.0004, BF = 65.31), the Pavlovian system (T = 276, p = 0.0007, BF = 38.15), and the system for deciding when to stop planning (T = 82, p < 0.0001, BF = 23568.70).

Footnote 2: Here and throughout the manuscript, we always report the results of Bayesian hypothesis tests along with their frequentist equivalents. We performed Bayesian hypothesis testing using the BayesFactor package in R (Morey & Rouder, 2018) for all tests except the Wilcoxon test, for which we used JASP (JASP Team, 2021). We report the results of each Bayesian hypothesis test in terms of the Bayes factor in favor of the alternative hypothesis; BF = x means that the alternative hypothesis is x times as likely to be true as the null hypothesis.

This makes sense because the structure of the environment makes it beneficial to inspect the nodes that are most uncertain (a feat accomplished by the metareasoning system), to explore nodes that lie on the path to the most valuable nodes (as recommended by the Pavlovian system), and to stop as soon as a very good path has been identified (a feat accomplished by the system for deciding when to stop). Metacognitive feedback also drove people towards planning more in general by reducing the amount of mental-effort avoidance (T = 1.0, p = 0.0001, BF = 167.25). Action FB, by contrast, drove people towards relying more on the Pavlovian system (T = 183, p = 0.0004, BF = 1236.80) and on the system for deciding when to stop planning (T = 134, p = 0.0001, BF = 685.42), and relying less on the model-free values and heuristics decision system (T = 229, p = 0.0004, BF = 172.56). In the condition without feedback, people relied increasingly more on the Pavlovian system (T = 148, p < 0.0001, BF = 1852.39), the system for deciding when to stop planning (T = 173, p < 0.0002, BF = 647.56), and the model-based metareasoning system (T = 206, p = 0.0012, BF = 38.51), but less markedly than in the metacognitive feedback condition.

Fig. 13 Temporal evolution of the relative influence of different decision systems and other factors in the control condition without feedback (a), the experimental condition with metacognitive feedback (b), and the experimental condition with action feedback (c)

The computational microscope also provides insights into which unique strategy types people pass through during learning (learning trajectories) and how this is affected by feedback. Overall, we found that 86% of people's learning trajectories were unique. However, when we zoom out to the level of strategy types, the computational microscope reveals several common learning trajectories (see Table 3). We found that the number of strategy types people went through from their initial strategy to their final strategy was lower when participants received metacognitive feedback than when they received action feedback (t(107) = −3.73, p = 0.0002, BF = 161.30) or no feedback (t(107) = −2.65, p = 0.0046, BF = 8.77).
We found no significant difference between the Action FB and the No Feedback conditions (t(106) = 1.46, p = 0.0737, BF = 0.09).

Table 3 Common trajectories of strategy types by the type of feedback participants received. Strategy-type abbreviations: FP – frugal planning; MP – myopic planning; MGS – maximizing goal-setting with limited backward planning; LS – local search; FP-BFS – forward planning like best-first search; FGS – frugal goal-setting; MS – miscellaneous strategies

- No FB: 7% FP; 6% MP → LS → MGS; 6% MGS; 4% FP-BFS → MGS; 4% MS → MGS
- MCFB: 22% MGS; 16% MS → MGS; 7% FGS → MGS; 7% FP → FGS → MGS; 5% FP → MGS
- Action FB: 11% FP; 4% MP; 4% FP → MP → FP → MP; 2% LS → MS → MGS → LS → MS → MGS → MS → MGS; 2% MP → FP → MP → FP → MS → MGS

Measuring how people's planning strategies differ depending on the structure of the environment

Previous work has shown that people adapt their cognitive strategies to the structure of the decision environment (Payne et al., 1993; Callaway et al., 2018; Lieder & Griffiths, 2017; Gigerenzer & Selten, 2002). Here, we verify that our method is able to detect differences in people's strategies across the four environments described in Section 5. To do so, we applied the computational microscope to the process-tracing data participants generated in the test blocks of the corresponding experiments, after they had learned about their respective environment in the training block (see Table 2). Because participants went through a sufficiently large number of training trials, their planning strategies were stable. As shown in Table 4, the computational microscope revealed that people adapted their planning strategies to the structure of their environment. These differences are systematic in the sense that how people's strategy choices differ across environments roughly corresponds to how the strategies' performance differs across those environments. To quantify this, we report the relative performance ($r^{\text{rel}}$) of the most common strategies relative to the best-performing strategy of each environment. The performance of each strategy ($r_i$) was determined by running 100,000 simulations and then normalized according to

$$r^{\text{rel}}_i = \frac{r_i - \min_j r_j}{\max_j r_j - \min_j r_j}.$$

For both environments with increasing variance, our computational microscope detected that the most common strategy was the near-optimal goal-setting strategy, which exploits the fact that the most distant rewards are the most variable. By contrast, people almost never used this strategy in any of the other environments. For the environment with decreasing variance, our computational microscope detected that people primarily use strategies that exploit the structure of this environment by prioritizing its immediate outcomes. For the environment with constant variance, the computational microscope detected that the second most frequent strategy performs best-first search with satisficing after inspecting all immediate outcomes, which is adaptive in this environment (Callaway et al., 2018), although the most commonly used strategy was not particularly adaptive. These results show that the computational microscope can reliably reveal how the planning strategies people use differ depending on the structure of the environment.
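The normalization behind $r^{\text{rel}}$ above is easy to reproduce; a minimal sketch with invented mean returns:

def relative_scores(mean_returns):
    """Normalize each strategy's mean return to [0, 1] within an environment."""
    lo, hi = min(mean_returns.values()), max(mean_returns.values())
    return {s: (r - lo) / (hi - lo) for s, r in mean_returns.items()}

print(relative_scores({"goal-setting": 39.7, "no planning": 12.1, "BFS": 30.2}))
# {'goal-setting': 1.0, 'no planning': 0.0, 'BFS': 0.6558...}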
Furthermore, comparing the strategies the computational microscope inferred for the 5-step version of the increasing-variance environment (used as a transfer task) to those inferred for the 3-step version (used as a training task) suggests that the computational microscope can reveal the transfer of learning across environments. Equally, the strategy types inferred by our computational microscope were consistent with previous findings suggesting that people adapt their decision strategies to the structure of the environment (Payne et al., 1993; Callaway et al., 2018; Lieder & Griffiths, 2017; Gigerenzer & Selten, 2002). Table 5 shows the performance and frequency of the inferred strategy types in decreasing order of their frequency for each of the four environments. The performance of a strategy type was determined by the weighted average of the performances of the strategies belonging to that strategy type, where the weight of a strategy is its relative frequency among the strategies belonging to the cluster. As expected, we find that in both increasing-variance environments, people primarily rely on strategies that prioritize the potential final outcomes. For the environment with decreasing variance, the computational microscope inferred that most people used the strategy type that is best adapted to this type of environment, namely myopic planning strategies. For the environment with constant variance, the computational microscope inferred that forward-planning strategies similar to best-first search were the second most frequent type of planning strategy. The most common strategy type was "Myopic planning", which includes several strategies that are similar to best-first search (see Section 5). Overall, the results in Tables 4 and 5 illustrate that our computational microscope makes it easy for researchers to describe both the adaptiveness of human planning and its limits.

Discussion

We have developed a computational process-tracing method that allows us to look at how people plan and how their planning strategies change over time. Our method extends the Mouselab paradigm for tracing people's decision strategies (Payne et al., 1993) in three ways. First, it progresses from one-shot decisions to sequential decision problems. Second, it introduces computational methods for analyzing process-tracing data in terms of cognitive strategies. Third, it extends the approach to measuring how people's planning strategies change over time. Our method is easy to use and freely available. We have successfully evaluated our method using simulations and human data. The results suggest that our computational microscope can measure cognitive plasticity in terms of the temporal evolution of people's cognitive strategies, and it also provides valuable information about trends in the changes of strategies and strategy types and about how people change their strategies when the environment changes. We have applied our computational microscope to a number of data sets. The results of these analyses contribute to a more detailed understanding of how people plan and revealed some interesting empirical characteristics of metacognitive learning. Our method can be used to study many different types of cognitive change across a wide range of time scales.
This makes it suitable for investigating learning, cognitive development, decision-making, individual differences, and psychopathology. We are optimistic that computational microscopes will become useful tools for investigating the learning mechanisms that enable people to acquire complex cognitive skills and that shape the way we think and decide. This will be an important step towards reverse-engineering people's ability to discover and continuously refine their own algorithms. From a psychological perspective, this line of work might also help us understand why we think the way we do and lead us to rethink our assumptions about what people can and cannot learn.

Table 4 Summary of the performance of the most frequent strategies across the four environments. The performance of each strategy ($r_i$) was determined by running 100,000 simulations and then normalized according to $r^{\text{rel}}_i = (r_i - \min_j r_j)/(\max_j r_j - \min_j r_j)$. To be included in this table, a strategy had to be used in at least 3% of all trials.

Increasing variance (3 steps):
- Search for the best possible final outcome — $r^{\text{rel}}$ 1.00, 45.1%
- Consecutive second maximum — $r^{\text{rel}}$ 0.93, 11.6%
- No planning — $r^{\text{rel}}$ 0.13, 10.6%

Increasing variance (5 steps):
- Search for the best possible final outcome — $r^{\text{rel}}$ 1.00, 20.8%
- No planning — $r^{\text{rel}}$ 0.28, 16.2%
- Explore immediate and final outcomes with satisficing on finding a large value — $r^{\text{rel}}$ 0.97, 12.4%
- Explore final outcomes and their parents — $r^{\text{rel}}$ 0.92, 11.6%
- Explore final outcomes and their parents with satisficing — $r^{\text{rel}}$ 0.97, 6.6%
- Explore immediate outcomes on the paths to the best final outcomes — $r^{\text{rel}}$ 0.99, 4.5%

Decreasing variance (3 steps):
- Explore immediate and final outcomes with satisficing on a positive value — $r^{\text{rel}}$ 0.97, 32.7%
- Satisficing best-first search after exploring all immediate outcomes — $r^{\text{rel}}$ 0.94, 26.7%
- No planning — $r^{\text{rel}}$ 0.00, 12.5%
- Explore immediate and final outcomes with satisficing on a positive value — $r^{\text{rel}}$ 0.96, 10.8%
- Explore sub-trees of positive immediate outcomes — $r^{\text{rel}}$ 0.93, 5.4%
- Explore all immediate outcomes — $r^{\text{rel}}$ 1.00, 5.3%

Constant variance (3 steps):
- Explore all immediate outcomes with satisficing — $r^{\text{rel}}$ 0.81, 17.9%
- Satisficing best-first search after exploring all immediate outcomes — $r^{\text{rel}}$ 0.91, 14.4%
- Explore the immediate children of the best immediate outcome — $r^{\text{rel}}$ 0.81, 10.2%
- Non-terminating best-first search — $r^{\text{rel}}$ 0.74, 6.4%
- Explore immediate and final outcomes with positive satisficing — $r^{\text{rel}}$ 1.00, 5.5%
- Best-first search after exploring all immediate outcomes — $r^{\text{rel}}$ 0.75, 5.5%
- Pruning of nodes with immediate negative rewards and choosing actions with the best long-term consequences — $r^{\text{rel}}$ 0.94, 4.8%
- Leave out one immediate outcome — $r^{\text{rel}}$ 0.79, 4.4%
- Consecutive second maximum — $r^{\text{rel}}$ 0.85, 4.0%
- Goal-setting with positive satisficing — $r^{\text{rel}}$ 0.90, 3.0%
Developmental psychologists could use our method to trace the development of cognitive strategies across the lifespan and elucidate how learning contributes to those developmental changes. Similarly, clinical psychologists and computational psychiatrists could apply it to trace how a person's cognitive strategies change as they develop and recover from different mental disorders. Importantly, our method can also be used to investigate how cognitive plasticity depends on the learning environment, individual differences, age (Das et al., 2019), time pressure, motivation, and interventions, including feedback, instructions, and reflection prompts. Using our method to measure individual differences in cognitive plasticity might reveal why equivalent experience can have fundamentally different effects on the psychological development of different people. This, in turn, can help us understand why some people are predisposed to develop certain cognitive styles, personalities, and mental disorders. Applications in computational psychiatry might use this approach to understand the development of mental disorders and to create computational assays for detecting whether a person is at risk of developing specific forms of psychopathology long before its symptoms occur.

Table 5 Summary of the performance of the most frequent strategy types for the four environments. The performance of each strategy was determined by running 100,000 simulations. The performance of a strategy type was determined by the weighted average of the performances of the strategies belonging to that strategy type, where the weight of a strategy is its relative frequency among the strategies of the same type. To be included in this table, a strategy type had to be used in at least 3% of all trials.

Increasing variance (3 steps):
- Maximizing goal-setting with limited backward planning — $r^{\text{rel}}$ 1.00, 62.9%
- Frugal planning strategies — $r^{\text{rel}}$ 0.00, 11.0%
- Miscellaneous strategies — $r^{\text{rel}}$ 0.81, 10.2%
- Local search strategies — $r^{\text{rel}}$ 0.92, 5.5%
- Myopic planning strategies — $r^{\text{rel}}$ 0.41, 4.7%
- Frugal goal-setting strategies — $r^{\text{rel}}$ 0.75, 3.8%

Increasing variance (5 steps):
- Maximizing goal-setting with limited backward planning — $r^{\text{rel}}$ 0.86, 38.9%
- Miscellaneous strategies — $r^{\text{rel}}$ 0.63, 24.6%
- Frugal planning strategies — $r^{\text{rel}}$ 0.00, 16.2%
- Frugal goal-setting strategies — $r^{\text{rel}}$ 0.58, 6.8%
- Immediate rewards on the path to the best final outcomes with satisficing — $r^{\text{rel}}$ 1.00, 4.5%
- Local search strategies — $r^{\text{rel}}$ 0.64, 4.1%
- Myopic planning strategies — $r^{\text{rel}}$ 0.33, 3.2%

Decreasing variance (3 steps):
- Myopic planning — $r^{\text{rel}}$ 1.00, 53.4%
- Forward planning strategies similar to best-first search — $r^{\text{rel}}$ 0.85, 33.0%
- Frugal planning strategies — $r^{\text{rel}}$ 0.10, 12.6%

Constant variance (3 steps) [Footnote 3]:
- Myopic planning — $r^{\text{rel}}$ 0.76, 46.8%
- Forward planning strategies similar to best-first search [Footnote 4] — $r^{\text{rel}}$ 1.00, 26.8%
- Frugal goal-setting strategies — $r^{\text{rel}}$ 0.65, 7.5%
- Maximizing goal-setting with limited backward planning — $r^{\text{rel}}$ 0.75, 6.8%
- Miscellaneous strategies — $r^{\text{rel}}$ 0.22, 4.6%
- Local search strategies — $r^{\text{rel}}$ 0.40, 3.5%

Footnote 3: The strategy type with the highest average score was "Frugal planning strategies" ($r^{\text{rel}}$ = 1.00, frequency: 2.1%). Its score is so high because its most frequent strategy is a high-performing strategy similar to best-first search. This strategy type is not listed because its relative frequency is less than 3%.
Footnote 4: The average score of this adaptive strategy type is surprisingly low because it includes strategies that incur a very high planning cost by inspecting all of the available information.
To facilitate these applications, future work might extend the proposed measurement model to continuous strategy spaces, a wider range of tasks and strategies, and learning at the time scale of individual cognitive operations. In addition, future work will leverage our computational microscope to elucidate individual differences in cognitive plasticity within and across psychiatric conditions and different age groups. We will also work on making our inferences more precise by learning models of strategies and strategy transitions from human data. To move towards a more naturalistic planning task, future versions of our method could present participants with fully revealed environments and infer their planning strategies from eye-tracking data. The computational approach could be analogous to the one presented here, except that clicks would be replaced by saccades.

The ideas behind our approach are not entirely novel. Process tracing has already been used extensively to study people's decision strategies (Payne et al., 1993; Schulte-Mecklenbeck et al., 2011; Schulte-Mecklenbeck et al., 2019), and Bayesian inference has been used to infer which decision strategies are included in individual participants' repertoires (Scheibehenne et al., 2013), when people switch between different decision strategies (Lee & Gluck, 2021), and which strategies people use in economic games (Costa-Gomes & Crawford, 2006; Crawford, 2008; Costa-Gomes et al., 2001). Our method has several advantages. What differentiates our approach from the original Mouselab paradigm (Payne et al., 1993) is that it measures how people plan and that we infer people's strategies from the process-tracing data. On a high level, the Bayesian toolbox approach by Scheibehenne et al. (2013) also infers people's strategies. Their approach infers which strategies are included in a person's repertoire. However, it does not attempt to resolve which strategy was used on which trial. Instead, it makes the simplifying assumption that every decision is influenced by all strategies in the person's toolbox. By contrast, our method makes the different assumption that on each trial each participant draws a single strategy from the toolbox. Based on this assumption, our method infers which individual strategy a participant used on the first trial, which individual strategy they used on the second trial, and so on.

The methods developed by Lee and Gluck (2021) and Lee et al. (2019) are more similar to our method in that they infer which strategy each participant used on each trial of the experiment. The main difference is that these methods were developed for studying multi-cue decision-making, whereas our method was developed for studying planning. The method by Lee et al. (2019) has the advantage that it uses process-tracing data, verbal reports, and choices, whereas our method relies exclusively on the process-tracing data. While our method and that of Lee et al. (2019) analyze the data of each participant individually, the method by Lee and Gluck (2021) additionally performs inference at the group level and constrains inferences about individual participants by the characteristics of the group. Furthermore, the method by Lee and Gluck (2021) additionally infers two aspects of the generative model of strategy sequences from the data, namely the probabilities of the possible initial strategies and the probabilities of the possible strategy transitions.
The main advance of our method is that it differentiates between a much larger number of strategies (79 vs. 4). Furthermore, we examined multiple alternative models of strategy transitions and validated our method on data from multiple experiments that varied the decision environment and induced systematic learning-related changes in people's strategies over time. Finally, the approaches that have been developed to infer which strategies people use in economic games (Costa-Gomes & Crawford, 2006; Crawford, 2008; Costa-Gomes et al., 2001) assume that each person always uses the same strategy and therefore cannot measure how a person's strategy changes over time. Furthermore, the strategies these methods measure are specific to strategic social interaction. The strategies people use in tasks such as planning a road trip or a project are very different, so studying them requires a different methodology, such as the one we have developed in this work. In conclusion, the approach introduced in this article complements these existing approaches in useful ways that make it possible to measure people's planning strategies and how they discover them.

Our methods are not without limitations. First and foremost, the Mouselab-MDP paradigm inherits at least one of the limitations of the Mouselab paradigm on which it is based. Concretely, the Mouselab-MDP paradigm might change how people plan by making information acquisition costlier than it might otherwise be. Previous research comparing Mouselab-based measures of people's decision processes against equivalent measures based on eye-tracking found that the increased cost of information acquisition in the Mouselab paradigm led people to acquire less information and, to some extent, also changed the order in which they acquired information (Lohse & Johnson, 1996). We believe it is likely that similar differences also exist for the Mouselab-MDP paradigm. As Lohse and Johnson (1996) pointed out, such differences matter more for some research questions than for others. Following the logic of their analysis, we believe that there are many important questions about planning and metacognitive learning that are unaffected by such differences. Concretely, our method should be well suited to characterizing the qualitative effects of experimental manipulations on planning and learning, as long as it can be expected that the qualitative effects would be the same if the cost of information acquisition were lower. Regardless, we believe that comparing the process-tracing data collected with the Mouselab-MDP paradigm to corresponding process-tracing data based on eye-tracking is an interesting direction for future work.

A perhaps more provocative possibility is that the planning environment the Mouselab-MDP paradigm seeks to emulate is one in which people cannot simply look up what the outcomes of their actions would be but have to estimate them through effortful mental simulation. In this sense, it is conceivable that the Mouselab-MDP paradigm is closer to the real-world problems it is designed to mimic than an equivalent eye-tracking paradigm would be. This suggests that future work should compare the plans people arrive at when they have to rely on mental simulation to the plans they arrive at when those mental simulations are externalized by the Mouselab-MDP paradigm.
One limitation of our computational microscope is that its current implementation requires that the task environment be symmetric and contain no circular paths. This is because the features defined in Eq. 1 are currently computable only for such structures. Generalizing the implementation of the computational microscope so that it can be applied to other kinds of environments may be a worthwhile direction for future work.

In summary, our method makes it possible to more directly observe the previously hidden phenomenon of cognitive plasticity in many of its facets, ranging from skill acquisition, learning to think differently, cognitive decline, self-improvement, and changes in cognitive dispositions to the onset of, progression of, and recovery from psychiatric symptoms and mental disorders. In conclusion, we believe that the method introduced in this paper can be used to advance cognitive science, psychology, and psychiatry in many promising ways.

Funding: Open Access funding enabled and organized by Projekt DEAL.

Author Contributions: FL, FC, YRJ, PD, and TLG developed the ideas for the methods and article. YRJ and FC contributed software. YRJ conducted simulations. FL, PMK, and FC collected the data. YRJ and FC analyzed the data. FL, YRJ, and FC wrote the article. FL, YRJ, PD, FC, and TLG edited and revised the article. RH performed a computational reproducibility audit and ensured that all results are reproducible.

Open Practices Statement: None of the experiments were preregistered. The code of the Mouselab-MDP paradigm is available at https://github.com/RationalityEnhancement/Mouselab-MDP. The code of our computational process-tracing method is available at https://github.com/RationalityEnhancement/ComputationalMicroscope. The data sets used in this article are available at https://github.com/RationalityEnhancement/ComputationalMicroscope/tree/master/data/human.

Appendix

A.1 Experiment (3-step task with increasing variance)

We evaluated our computational microscope on data that was collected in a pilot experiment by Callaway et al. (2022a). The methods and results of that experiment were as follows.

Participants: We recruited 164 participants on Amazon Mechanical Turk (average age 35 years, range: 18–72 years; 75 female). Balanced condition assignment and repeat-participant exclusion were performed using psiTurk (Gureckis et al., 2016). None of the participants who finished the experiment were excluded from the analysis.

Procedure: The experiment comprised instructions, a training block, a test block, and an exit survey. The training block comprised 10 trials, and the test block comprised 20 trials. Each participant was assigned to receive either metacognitive feedback (55 participants), action feedback (55 participants), or no feedback (54 participants) during the training block. The metacognitive feedback used the optimal planning strategy for the environment to provide feedback on participants' clicks. The action feedback condition provided feedback on the actions (moves) of the participants. The exit survey asked participants what they had learned, their age, and their gender identity.

Materials: Each trial of the experiment presented participants with an instance of the 3-step planning problem described in Fig. 1. The key structure of this problem is that the range of possible rewards is smallest in the first step, larger in the second step, and largest in the third step. To operationalize the cost of planning, we charged participants one virtual dollar per click.
To simplify the implementation of metacognitive feedback, we required that all clicks be made before the first move. To eliminate the time cost of engaging in planning relative to speeding through the experiment, participants who spent less than 7 seconds on planning (e.g., only 3 seconds) had to wait for the remaining time after executing their moves (e.g., for 4 seconds). In the test block, participants started with an endowment of 50 virtual dollars and earned a bonus of 1 cent for every $5 they made in the game.

Results: In the test block, the average score of participants receiving no feedback was 27.58 points/trial (95% CI: [26.21, 28.95]). Critically, participants receiving metacognitive feedback scored significantly higher (34.86 points/trial; 95% CI: [33.83, 35.89]; t(3268) = 8.3268, p < 0.001, BF = 5.39·10¹³). By contrast, giving participants conventional feedback on their actions appeared to be ineffective. That is, participants receiving action feedback did not score significantly higher than participants in the no-feedback condition (27.57 points/trial; 95% CI: [26.21, 28.93]; t(3268) = 0.0108, p = 0.504, BF = 0.039) and performed significantly worse than participants who received metacognitive feedback (t(3298) = −8.3425, p < 0.001, BF = 6.13·10¹³).

A.2 Modeling transitions between planning strategies

The transitions between strategies from one trial to the next define the nature and type of cognitive plasticity. One possible way in which people might switch from one strategy to another is based on the similarity of the strategies (measured in terms of the distance between them). In addition to the similarity of strategies, we have to define how those similarities manifest in the actual transitions. Therefore, we define several measures of similarity and mechanisms by which the distances manifest in strategy transitions.

A.2.1 Distances

To find out whether the frequency of transitions between strategies depends on how close the strategies are, we considered six metrics of distance between strategies. Using these distance metrics, we did not find any correlation between the probability of transitioning from one strategy to another and the distance between them. In this section, we describe each of the three types of metrics and their constituents.

1. Behavioral distances. The behavioral distance between two strategies is measured by the distance between the probability distributions over the clicks made by the two strategies. We consider two behavioral distances to quantify the similarity between strategies.

Jensen-Shannon divergence:

$$\Delta(s_1, s_2) = \mathrm{JSD}\left[p(d \mid s_1), p(d \mid s_2)\right] = \mathrm{KL}\!\left[p(d \mid s_1), \frac{p(d \mid s_1) + p(d \mid s_2)}{2}\right] + \mathrm{KL}\!\left[p(d \mid s_2), \frac{p(d \mid s_2) + p(d \mid s_1)}{2}\right] \tag{6}$$

Jeffreys divergence (symmetric KL divergence):

$$\Delta(s_1, s_2) = \mathrm{JD}\left[p(d \mid s_1), p(d \mid s_2)\right] = \mathrm{KL}\left[p(d \mid s_1), p(d \mid s_2)\right] + \mathrm{KL}\left[p(d \mid s_2), p(d \mid s_1)\right] \tag{7}$$

2. Distances in the strategy weight space. This distance metric measures the similarity between strategies based on the weight-space representation of the strategies. To compute the distance between strategies according to this metric, we represent a strategy by its preference for particular features, quantified by the weights applied to the values of those features. Thus, we quantify the similarity between strategies by representing each strategy by a weight vector ($\mathbf{w}$) and measuring the similarity of the strategies as the similarity of their weight vectors.
We consider two notions of similarity for strategy weight vectors.

Manhattan distance in the strategy weight space:

$$\Delta(s_1, s_2) = \lVert \mathbf{w}_1 - \mathbf{w}_2 \rVert_1 \tag{8}$$

Euclidean distance in the strategy weight space:

$$\Delta(s_1, s_2) = \lVert \mathbf{w}_1 - \mathbf{w}_2 \rVert_2 \tag{9}$$

where $\lVert \mathbf{x} \rVert_p$ denotes the $p$-norm of the vector $\mathbf{x}$, i.e.,

$$\lVert \mathbf{x} \rVert_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p} \tag{10}$$

3. Distances in the decision-system weight space. This distance metric measures the similarity between strategies based on the contributions of the various decision systems to the strategies. To compute the distance between strategies according to this metric, we represent a strategy by its preference for one decision system over the others. This preference is quantified by the total weight over all the features of the decision system. Thus, we measure the similarity between strategies by representing each strategy by a weight vector ($\mathbf{w}^{ds}$), in which each weight quantifies the preference for one decision system, and measuring the similarity of the strategies as the similarity of their decision-system weight vectors. We consider two notions of similarity for decision-system weight vectors.

Manhattan distance in the decision-system weight space:

$$\Delta(s_1, s_2) = \lVert \mathbf{w}^{ds}_1 - \mathbf{w}^{ds}_2 \rVert_1 \tag{11}$$

Euclidean distance in the decision-system weight space:

$$\Delta(s_1, s_2) = \lVert \mathbf{w}^{ds}_1 - \mathbf{w}^{ds}_2 \rVert_2 \tag{12}$$

A.2.2 Priors

In this section, we define how the distances between strategies manifest in transitions between them. For this purpose, we consider two priors on the strategy transitions that use the distances defined in Appendix A.2.1: the gradual-learning prior and the mixed prior. We describe each of them in turn.

Gradual-learning prior. The gradual-learning prior ($m_{\text{gradual}}$ in Eq. 13) assumes that strategies change gradually (i.e., transitions favor strategies that are nearby in terms of the distance metrics, compared to other strategies):

$$P(S_{t+1} = s \mid S_t, m_{\text{gradual}}) = \frac{\exp\left(-\frac{1}{\tau}\Delta(s, S_t)\right)}{\sum_{s' \in S} \exp\left(-\frac{1}{\tau}\Delta(s', S_t)\right)} \tag{13}$$

Mixed prior. The mixed prior ($m_{\text{mixed}}$ in Eq. 14) assumes that insight-like changes ($m_{\text{insight}}$ in Eq. 2) and gradual changes coexist:

$$P(S_{t+1} = s \mid S_t, m_{\text{mixed}}) = p_{\text{gradual}} \, P(S_{t+1} = s \mid S_t, m_{\text{gradual}}) + (1 - p_{\text{gradual}}) \, P(S_{t+1} = s \mid S_t, m_{\text{abrupt}}) \tag{14}$$

In Eqs. 13 and 14, $\tau$ is a temperature parameter that controls how strongly the distance between strategies affects the transition probabilities between them. We found that fitting $\tau$ did not improve our inferences; we therefore set its value to 1. We evaluated all the distance metrics defined above to verify whether gradual and mixed priors based on those distances improved our inferences about the strategies. Model selection revealed that the model without a prior based on strategy distances (i.e., the insight-like transition prior) performed better than the models with the gradual and mixed transition priors.
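As a minimal sketch of how these definitions fit together, the following snippet implements the behavioral distances (Eqs. 6 and 7) and turns a distance matrix into the gradual and mixed transition priors (Eqs. 13 and 14). The toy distributions, the 4 × 4 distance matrix, and the uniform stand-in for the insight-like prior (Eq. 2) are all invented for illustration; note also that Eq. 6 as defined above is twice the more common "averaged" Jensen-Shannon divergence.

import numpy as np

def kl(p, q):
    """KL[p, q] for discrete distributions given as 1-D arrays."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jensen_shannon(p, q):            # Eq. 6
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return kl(p, m) + kl(q, m)

def jeffreys(p, q):                  # Eq. 7: symmetric KL divergence
    return kl(p, q) + kl(q, p)

def gradual_prior(D, tau=1.0):
    """Eq. 13: row-stochastic matrix, P[i, j] = P(S_{t+1}=j | S_t=i)."""
    logits = -D / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def mixed_prior(D, P_abrupt, p_gradual=0.5, tau=1.0):
    """Eq. 14: mixture of the gradual and the insight-like (abrupt) prior."""
    return p_gradual * gradual_prior(D, tau) + (1 - p_gradual) * P_abrupt

# Toy click distributions for two strategies, p(d | s1) and p(d | s2).
p, q = [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]
print(jensen_shannon(p, q), jeffreys(p, q))

# Toy 4 x 4 symmetric distance matrix between strategies.
rng = np.random.default_rng(1)
D = rng.random((4, 4))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)
P_abrupt = np.full((4, 4), 0.25)     # placeholder for the Eq. 2 prior
print(mixed_prior(D, P_abrupt).round(2))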
A.3 Simulated data

To validate our computational microscope on simulated data, we generated data from five models: the random model (which selects one strategy at random on each trial), the insight model, which generates click sequences according to Eq. 2, the gradual model described in Eq. 13, the mixed model described in Eq. 14, and the RSSL model based on Lieder and Griffiths (2017). The first four models repeat the previous strategy with a probability of 50% and otherwise sample a strategy from their model of strategy change. According to the mixed model, there is a 50% chance that such strategy changes follow the gradual model (Eq. 13) and a 50% chance that they follow the insight model (Eq. 2). The RSSL model treats the problem of deciding how to plan as a 79-armed bandit with one arm for each strategy. It performs Bayesian inference on the expected return of each strategy and selects strategies via Thompson sampling. It has 79 × 2 = 158 free parameters that specify the prior mean and variance of each strategy's expected return. These parameters were fit using data from Lieder (2018b). In all cases, the simulation of how the strategies generate click sequences was also probabilistic. Concretely, the click sequences were generated according to the probabilistic softmax model of the corresponding strategy (Eq. 1). For each strategy, the temperature parameter ($\tau$) that determines the amount of randomness in the generation of the click sequences was estimated from human data.

We found that for data generated from the random model, our computational microscope correctly inferred the true strategy on 76 ± 0.007% of the trials and the true strategy type on 91 ± 0.005% of the trials. Similarly, for the gradual-learning model, our computational microscope correctly predicted the true strategy on 77 ± 0.007% of the trials and the true strategy type on 91 ± 0.004% of the trials. For data generated from the insight-like learning model, our method correctly predicted the ground-truth strategy on 88 ± 0.005% of the trials and the ground-truth strategy type on 96 ± 0.003% of the trials. For the data generated from the model that assumes a combination of gradual learning and abrupt insights, our method correctly inferred the true strategy on 82 ± 0.006% of the trials and the true strategy type on 94 ± 0.004% of the trials. For data generated from the RSSL model, our computational microscope correctly inferred the true strategy on 76 ± 0.007% of the trials and the true strategy type on 90 ± 0.005% of the trials.

A.4 Planning strategies

We considered a total of 79 strategies, clustered into 13 types of strategies. The clustering was performed by applying Ward's hierarchical clustering to the strategies' pairwise distances, computed as the symmetric Kullback-Leibler divergence between the probability distributions over clicks induced by the strategies. Since the output of the clustering is a hierarchical partitioning of the set of strategies, we chose the level of the hierarchy that gave us the most interpretable clusters.
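A minimal sketch of this clustering step for a handful of strategies follows. The 5 × 5 distance matrix is invented; note also that Ward's linkage formally assumes Euclidean geometry, so applying it to symmetric-KL distances, as done here, is a heuristic whose cut level is chosen for interpretability.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
D = rng.random((5, 5))
D = (D + D.T) / 2                   # symmetric toy distance matrix
np.fill_diagonal(D, 0.0)

Z = linkage(squareform(D), method="ward")          # condensed distances in
labels = fcluster(Z, t=2, criterion="maxclust")    # cut into 2 "types"
print(labels)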
This section describes the strategy types and the strategies that belong to each type. Note that the strategies are stochastic, so each description corresponds to the actual behavior only most of the time. All of the strategies described below make the best use of the available information when making the final move; that is, they take the path with the highest expected value.

A.4.1 Goal-setting with exhaustive backward planning

These strategies explore all the outcomes. They start by exploring the final outcomes and then plan backwards. They differ in when they initiate backward planning. The model-free values and heuristics decision system and the Pavlovian decision system together contribute at least 75% on average to the strategies in this cluster.

Strategy 1: Goal-setting. This strategy starts by exploring the final outcomes in random order; if a positive final outcome is found, it explores the outcomes in random order along the path back to the start node, and this procedure is repeated until all outcomes have been explored.

Strategy 2: Immediate goal-setting. This strategy behaves like Strategy 1 but differs in the way it explores the outcomes on the path to the start node: instead of exploring them in random order, it explores them level by level.

Strategy 3: Immediate goal-setting with preference for siblings. This strategy behaves like Strategy 2, except that after exploring the path back to the start node, it explores the siblings of the previously observed final outcome.

Strategy 4: Immediate goal-setting with preference for siblings and immediate ancestors. This strategy behaves like Strategy 3, but gives higher priority to immediate ancestors when there are multiple ancestors.

A.4.2 Maximizing goal-setting with exhaustive backward planning

The only strategy in this category first explores all final outcomes and then plans backwards from them in the order of their values until it has explored all the outcomes. The model-free values and heuristics decision system and the Pavlovian decision system together contribute at least 75% on average to the strategy in this cluster.

Strategy 5: Maximizing goal-setting with exhaustive backward planning. This strategy starts by exploring the final outcomes and then plans backwards in decreasing order of the values of the final outcomes. It does not terminate until it has observed all the outcomes.

A.4.3 Maximizing goal-setting with limited backward planning

These strategies focus their exploration on potential final outcomes, and their termination depends on whether or not a high value has been observed. These strategies do not perform backward planning, except for Strategy 6, which is an approximation to the near-optimal goal-setting strategy for the three-step task with increasing variance defined in Section 5. The strategies differ in when they terminate planning, especially with respect to how much they continue exploring after uncovering sufficiently good information. The model-free values and heuristics decision system, the model-based metareasoning decision system, and the Pavlovian decision system together contribute at least 75% on average to the strategies in this cluster.

Strategy 6: Search for the best possible final outcome. This strategy starts by exploring the final outcomes in random order and stops clicking upon finding an outcome whose value equals the maximum observable value of the reward distribution. If no such node is found, it explores all the final outcomes. If there are multiple final outcomes with the same highest observed value, the strategy may do backward planning along the paths from those outcomes. This strategy approximates the optimal strategy for the three-step task with increasing variance.

Strategy 7: Excessive goal-setting. This strategy starts by exploring the final outcomes in random order and explores one extra outcome after finding an outcome whose value equals the maximum observable value of the reward distribution. If no such node is found, it explores all the final outcomes and terminates.

Strategy 8: Leave out one final outcome. This strategy randomly explores all final outcomes except for one randomly selected final outcome.
Strategy 9: Extra planning after exploring the second best value This strategy starts by exploring the final outcomes and terminates after exploring one extra final outcome after having found an outcome whose value is greater than the second largest observable value of the reward distribution.

Strategy 10: Explore as many final outcomes as there are initial outcomes This strategy explores as many final outcomes as there are immediate outcomes in the task structure.

Strategy 11: One outcome per sub-tree of the start node This strategy explores one random outcome from each sub-tree of the start node.

Strategy 12: Consecutive second maximum This strategy starts by exploring the final outcomes in a random order and terminates after consecutively exploring two outcomes whose values are greater than the second largest value of the reward distribution.

Strategy 13: Explore two extra outcomes after exploring a positive outcome This strategy starts with exploring final outcomes in a random order and terminates after exploring two extra final outcomes after exploring a positive final outcome.

Strategy 14: Immediate outcomes after final outcomes with satisficing This strategy explores all the final outcomes first and then explores all the immediate outcomes. While exploring the final outcomes, if it finds an outcome whose value is equal to the maximum observable value of the reward distribution, it terminates.

Strategy 15: Explore parents of largest final outcomes This strategy explores all the final outcomes and then explores the parents of the final outcomes with the largest value among the explored outcomes.

A.4.4 Frugal goal-setting strategies

These strategies focus their exploration on potential final outcomes but explore less overall. They differ in the way they terminate planning. The model-free values and heuristics decision system, the model-based metareasoning decision system and the Pavlovian decision system in combination contribute at least 75% on average to the strategies in this cluster.

Strategy 16: Goal-setting with backward planning This strategy starts by exploring the final outcomes. It explores the final outcomes until a final outcome with a value equal to the maximum observable value of the reward distribution is explored, plans backwards to the corresponding immediate outcome and then terminates. If such an outcome is not found, it terminates after exploring all final outcomes.

Strategy 17: Goal-setting with positive satisficing This strategy starts with exploring final outcomes and terminates after exploring an outcome whose value is positive. If such an outcome is not found, it explores all the final outcomes and then terminates.

Strategy 18: One final outcome This strategy explores one random final outcome and terminates.

Strategy 19: Goal setting with forward planning This strategy starts by exploring the final outcomes and, after finding an outcome with value equal to the maximum observable value of the reward distribution, it explores the path from the corresponding immediate outcome to that final outcome and then terminates.

Strategy 20: Explore one sub-tree This strategy explores all the outcomes of one random sub-tree of the start node.

Strategy 21: Explore parent of the best final outcome This strategy explores all the final outcomes until it finds a final outcome whose value is equal to the maximum observable value of the reward distribution and then explores the parent of that outcome (a sketch follows below).
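A hedged sketch of Strategy 21's behavior, assuming hypothetical `parent` and `observe` helpers for the task interface:

```python
def strategy_21(final_nodes, parent, observe, max_reward):
    """Sketch of Strategy 21: scan the final outcomes until the maximum
    observable reward turns up, then inspect that outcome's parent."""
    observed = {}
    for node in final_nodes:
        observed[node] = observe(node)
        if observed[node] == max_reward:
            observed[parent(node)] = observe(parent(node))
            break
    return observed
```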
Strategy 22: Explore one path This strategy explores one random path from an immediate outcome to a final outcome and then terminates.

Strategy 23: Two final outcomes This strategy explores two randomly chosen final outcomes and then terminates.

Strategy 24: Explore the parent of a positive final outcome This strategy starts by exploring the final outcomes in a random order and, upon finding a final outcome with a positive value, it explores the parent of that outcome and terminates. If no final outcome with a positive value is found, it explores all final outcomes and then terminates.

Strategy 25: Explore all final outcomes of a randomly chosen sub-tree and the parent of a randomly chosen observed final outcome This strategy explores all the final outcomes of a randomly chosen sub-tree of the start node and then explores the parent of a randomly chosen final outcome from the set of observed final outcomes.

A.4.5 Strategy that explores immediate outcomes on the paths to the best final outcomes

The only strategy in this category explores all the final outcomes and then explores the immediate outcomes of the best among them. The model-free values and heuristics decision system and the satisficing and stopping decision system together contribute at least 75% on average to the strategies in this cluster.

Strategy 26: Explore immediate outcomes on the paths to the best final outcomes This strategy starts by exploring all the final outcomes and then explores the immediate outcomes of paths that lead to the best final outcomes.

A.4.6 Strategy that explores immediate rewards on the paths to the best final outcomes with satisficing

The only strategy in this cluster behaves similarly to the strategy in the previous category but differs in the fact that it stops exploring the final outcomes after finding an outcome whose value is equal to the maximum observable value of the reward distribution and then explores the immediate outcome of that node. The model-free values and heuristics decision system, the model-based metareasoning decision system and the Pavlovian decision system in combination contribute at least 75% on average to the strategy in this cluster.

Strategy 27: Explore immediate outcomes on the paths to the best final outcomes with satisficing This strategy explores all the final outcomes randomly until it finds a final outcome whose value is equal to the maximum observable value of the reward distribution and then explores the corresponding immediate outcome of that final outcome.

A.4.7 Forward planning strategies similar to Breadth First Search

These strategies perform planning similar to Breadth First Search, i.e., they first inspect all outcomes at the first level, before inspecting all outcomes at the second level, and so on. These strategies differ in the order in which outcomes at the same level are explored; a sketch of the level-by-level ordering follows below. Strategy 30 in this category is a satisficing version of breadth-first search which terminates upon finding a high value. The model-based metareasoning decision system alone contributes more than 75% on average to the strategies in this cluster.

Strategy 28: Randomized Breadth First Search This strategy explores outcomes level by level, that is, it randomly explores the outcomes that are one step away, then randomly explores the outcomes that are two steps away, and so on until all nodes are clicked.

Strategy 29: Breadth First Search This strategy behaves similarly to Strategy 28 except that sibling outcomes are observed consecutively.
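A minimal sketch of the randomized level-by-level ordering of Strategy 28, assuming a hypothetical `children` accessor for the task's tree structure (the start node itself is never clicked, matching the paradigm):

```python
import random

def randomized_bfs_order(root, children):
    """Sketch of Strategy 28: visit outcomes level by level, shuffling the
    order of the nodes within each level."""
    order, level = [], list(children(root))
    while level:
        random.shuffle(level)               # random order within the level
        order.extend(level)
        level = [c for node in level for c in children(node)]
    return order
```

Removing the shuffle and iterating siblings in a fixed order would give Strategy 29's deterministic variant.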
Strategy 30: Satisficing Breadth First Search This strategy explores outcomes in breadth-first order and terminates upon finding the maximum value of the reward distribution.

A.4.8 Middle-out planning

The only strategy in this category explores the center outcomes first, then inspects immediate outcomes and finally inspects final outcomes. The model-free values and heuristics decision system and the model-based metareasoning decision system in combination contribute at least 75% on average to the strategy in this cluster.

Strategy 31: Middle-out planning This strategy explores the center nodes first, then explores the immediate outcomes and then the final outcomes.

A.4.9 Forward planning strategies similar to Best First Search

These strategies are similar to the Best First Search planning strategy. They differ in how they start and how they terminate. The model-free values and heuristics decision system and the Pavlovian decision system together contribute at least 75% on average to the strategies in this cluster.

Strategy 32: Non-terminating Best First Search This strategy starts by exploring the immediate outcomes and explores an unobserved child of the observed outcome with the highest value. If no outcome is observed, it chooses the immediate outcome randomly.

Strategy 33: Best First Search after exploring all immediate outcomes This strategy explores all immediate outcomes first and then follows the best-first strategy (Strategy 32) to explore outcomes further.

Strategy 34: Satisficing Best First Search after exploring all immediate outcomes This strategy is similar to Strategy 33 but it stops exploring when an outcome with value equal to the maximum observable value of the reward distribution is observed.

Strategy 35: Explore immediate outcomes and then sub-trees This strategy explores all immediate outcomes in a random order and then explores all the outcomes of their sub-trees, exploring each sub-tree in a random order.

Strategy 36: Explore sub-trees of positive immediate outcomes This strategy explores all the immediate outcomes and then explores complete sub-trees of the immediate outcomes with a positive value in a random order.

A.4.10 Local search strategies

These strategies focus on information about the sub-trees and next/previous steps along the paths that have received the most consideration so far. These strategies differ in whether they prioritize sub-trees or paths and whether earlier versus later outcomes have already been observed. The model-free values and heuristics decision system and the satisficing and stopping decision system together contribute at least 75% on average to the strategies in this cluster.

Strategy 37: Progressive Deepening This strategy is similar to Depth First Search (i.e., it starts with exploring the nodes level by level, first observing the node and then its children) but instead of choosing a sibling of a final outcome, it chooses to explore a path starting from the immediate outcome again.

Strategy 38: Priority to explored ancestors This strategy randomly selects the first outcome to explore. Based on the outcomes explored, this strategy prioritizes exploring outcomes that have a larger number of observed ancestors than explored successors (a sketch of this priority rule follows below).

Strategy 39: Priority to explored successors This strategy randomly selects the first outcome to explore. Based on the outcomes explored, this strategy prioritizes exploring outcomes that have a larger number of observed successors than explored ancestors.
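A hedged sketch of the priority rule behind Strategies 38 and 39, shown for Strategy 38 (flipping the sign of the score gives Strategy 39). The `ancestors` and `successors` accessors are hypothetical:

```python
def next_click_strategy_38(candidates, observed, ancestors, successors):
    """Sketch of Strategy 38's priority rule: among unobserved nodes, prefer
    the one whose observed ancestors most outnumber its observed successors."""
    def score(node):
        n_anc = sum(a in observed for a in ancestors(node))
        n_suc = sum(s in observed for s in successors(node))
        return n_anc - n_suc
    unobserved = [n for n in candidates if n not in observed]
    return max(unobserved, key=score) if unobserved else None
```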
Strategy 40: Priority to explored immediate ancestors This strategy is similar in behavior to Strategy 38, but an outcome is given higher priority if it has a higher number of immediate ancestors than immediate successors. If the number of immediate ancestors and immediate successors is equal, then it prioritizes the total number of ancestors over the total number of successors.

Strategy 41: Priority to explored immediate successors This strategy is similar in behavior to Strategy 39, but an outcome is given higher priority if it has a higher number of immediate successors than immediate ancestors. If the number of immediate ancestors and immediate successors is equal, then it prioritizes the total number of ancestors over the total number of successors.

Strategy 42: Satisficing Depth First Search This strategy's behavior is similar to that of Depth First Search but it terminates upon finding an outcome with value equal to the maximum observable value of the reward distribution.

Strategy 43: Leave out one sub-tree This strategy explores all sub-trees of the start node except one, taking the sub-trees in a random order and exploring all the outcomes of one sub-tree in a random order before moving on to the next.

Strategy 44: Explore all sub-trees This strategy explores all sub-trees of the start node in a random order, exploring all the outcomes of one sub-tree in a random order before moving on to the next.

Strategy 45: Explore all sub-trees with satisficing This strategy's behavior is similar to that of Strategy 44 but it terminates planning upon finding an outcome that has value equal to the maximum observable value of the reward distribution.

Strategy 46: One complete sub-tree and final outcomes of other sub-trees This strategy explores one random sub-tree of the start node and then explores the final outcomes of the other sub-trees.

Strategy 47: Two complete sub-trees and final outcomes of the last sub-tree This strategy explores all the outcomes of two sub-trees randomly, exploring one after the other, and then explores the final outcomes of the other sub-trees.

Strategy 48: Explore all sub-trees until the maximum value of the reward distribution is found and then explore the center outcome of an unobserved immediate outcome This strategy explores the outcomes of the sub-trees of the start node in a random order and, if it finds a final outcome with a value equal to the maximum observable value of the reward distribution, it explores the center outcome on the path from that final outcome to the corresponding immediate outcome and then terminates.

A.4.11 Frugal planning strategies

These strategies explore very little or not at all. They differ in which outcomes they inspect and when they terminate. The model-free values and heuristics decision system and the mental effort avoidance decision system together contribute at least 75% on average to the strategies in this cluster.

Strategy 49: Myopic Impulsive This strategy explores one randomly chosen immediate outcome and then terminates.

Strategy 50: No planning This strategy does not plan at all (i.e., it does not explore any outcomes).

Strategy 51: Explore immediate outcomes and final outcomes with satisficing on a positive value This strategy explores all the immediate outcomes until an outcome with a positive value is found and then explores the final outcomes reachable from that immediate outcome, exploring them until a final outcome with a positive value is found.
If it doesn't find an immediate outcome with a positive value, it terminates.

Strategy 52: Explore one center outcome This strategy chooses a random path and then explores the center outcome of that path.

A.4.12 Myopic planning strategies

These strategies start with exploring immediate outcomes and then explore the sub-trees of the best immediate outcomes. They differ in how many immediate outcomes they explore, which nodes in the sub-tree they explore and when they terminate. The model-free values and heuristics decision system, the model-based metareasoning decision system and the Pavlovian decision system in combination contribute at least 75% on average to the strategies in this cluster.

Strategy 53: Explore all immediate outcomes with satisficing This strategy starts with exploring the immediate outcomes and terminates upon finding an immediate outcome which has a positive value. If an immediate outcome with a positive value is not found, it terminates after exploring all immediate outcomes.

Strategy 54: Explore all immediate outcomes This strategy explores all immediate outcomes and then terminates.

Strategy 55: Pruning of nodes with immediate negative rewards and choosing actions with best long-term consequences This strategy first explores all the immediate outcomes and then, for immediate outcomes with a positive value, it explores the corresponding final outcomes such that sibling outcomes are explored consecutively (a sketch of this pruning pattern follows at the end of this section).

Strategy 56: Explore positive immediate outcomes and final outcomes with satisficing This strategy's behavior is similar to Strategy 55 but instead of observing all the final outcomes, the strategy terminates after finding an outcome with a value equal to the maximum observable value of the reward distribution.

Strategy 57: Leave out one immediate outcome This strategy explores all immediate outcomes except for one. The left-out immediate outcome is randomly selected.

Strategy 58: Explore immediate and final outcomes with satisficing on finding a large value This strategy starts by exploring immediate outcomes. It first explores an immediate outcome and then explores the final outcomes of the corresponding immediate outcome. If it finds an outcome with a value that is equal to the maximum observable value of the reward distribution while exploring the final outcomes, it terminates.

Strategy 59: Explore immediate and final outcomes with positive satisficing This strategy starts by exploring immediate outcomes. It first explores an immediate outcome and, if it has a positive value, it explores the final outcomes of the corresponding immediate outcome, and this pattern is repeated. If it finds an outcome with a positive value while exploring the final outcomes, it terminates.

Strategy 60: Explore the sub-tree which contains the largest final outcome This strategy explores all the final outcomes in a random order and then explores all the outcomes of the sub-trees which contain the final outcome with the largest value.

Strategy 61: Explore the immediate children of the best immediate outcome This strategy explores all the immediate outcomes and then explores a single child of the immediate outcome with the largest value.

Strategy 62: Explore final outcomes with preference for nodes in the same sub-tree of the root This strategy explores the final outcomes, exploring all the final outcomes of one sub-tree before moving on to the next, and terminates when it finds an outcome whose value is equal to the maximum observable value of the reward distribution.
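Before moving on, here is a hedged sketch of the pruning pattern shared by Strategies 55 and 56, in the spirit of the pruning idea of Huys et al. (2012) cited below. The `final_outcomes_of` and `observe` helpers are hypothetical stand-ins for the task interface:

```python
def strategy_55(immediate_nodes, final_outcomes_of, observe):
    """Sketch of Strategy 55: observe every immediate outcome, then inspect
    final outcomes only below branches whose immediate reward is positive
    (negative branches are 'pruned')."""
    values = {node: observe(node) for node in immediate_nodes}
    for node in list(values):                # snapshot: immediate outcomes only
        if values[node] > 0:                 # keep only unpruned branches
            for leaf in final_outcomes_of(node):
                values[leaf] = observe(leaf)
    return values
```

Strategy 56 would add a satisficing check that stops the inner loop once the maximum observable reward has been found.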
A.4.13 Other miscellaneous strategies

These strategies do not fit the definition of any of the above categories and appear to have little in common. The model-free values and heuristics decision system, the model-based metareasoning decision system and the satisficing and stopping decision system in combination contribute at least 75% on average to the strategies in this cluster.

Strategy 63: Inverse randomized Breadth First Search This strategy explores all the outcomes level by level, starting with the farthest ones and moving on to the closer ones, that is, exploring the outcomes that are three steps away (the farthest nodes), then exploring outcomes that are two steps away, and so on until all outcomes are explored.

Strategy 64: Explore immediate outcomes of final outcomes This strategy observes all the final outcomes first and then, in decreasing order of the outcomes' values, observes the immediate outcomes.

Strategy 65: A version of goal-setting that chooses between equally good goals based on the immediate reward This strategy explores all the final outcomes first and then compares the paths of the final outcomes with the largest values level by level, from the final outcomes to the immediate outcomes.

Strategy 66: Goal-setting with comparison of equivalent goals This strategy's behavior is similar to that of Strategy 65 but the outcomes on the paths to the immediate outcome are explored in a random order.

Strategy 67: Best Final Outcome This strategy explores all final outcomes in a random order and then terminates.

Strategy 68: Random planning This strategy explores outcomes in such a way that there is an equal probability of exploring a given outcome and terminating planning.

Strategy 69: Explore immediate outcomes of second best nodes This strategy first explores all the final outcomes and then explores the immediate outcomes of the final outcomes with the second-largest value.

Strategy 70: Explore immediate outcomes and final outcomes This strategy first explores all the immediate outcomes in a random order and then explores all the final outcomes in a random order.

Strategy 71: Explore immediate outcomes and final outcomes with termination This strategy first explores all immediate outcomes in a random order and then explores the final outcomes in a random order. While exploring the final outcomes, if an outcome whose value is equal to the maximum observable value of the reward distribution is found, it terminates.

Strategy 72: All immediate outcomes after all final outcomes This strategy explores all the final outcomes first and then explores all the immediate outcomes.

Strategy 73: Explore immediate, final and center outcomes This strategy first explores all the immediate outcomes, then explores all the final outcomes and then explores all the center outcomes.

Strategy 74: Explore all center outcomes This strategy explores all center outcomes and then terminates.

Strategy 75: Explore the path to the final outcome with the largest value and satisficing This strategy explores final outcomes until it finds a final outcome whose value is equal to the maximum observable value of the reward distribution and then explores the outcomes on the path from that outcome to the corresponding immediate outcome in a random order. After exploring the immediate outcome, it terminates.

Strategy 76: Explore center outcomes and then final outcomes This strategy explores all the center outcomes in a random order and then explores all the final outcomes in a random order.
Strategy 77: Explore center outcomes and one of their children This strategy explores one center outcome and then explores one of its randomly-chosen children, and then repeats this process until all center nodes are explored.

Strategy 78: Explore final outcomes and their parents This strategy first explores the final outcomes of a sub-tree of the start node, then explores the parents of the explored final outcomes, and then repeats this process for all of the sub-trees.

Strategy 79: Explore final outcomes and their parents with satisficing This strategy's behavior is similar to that of Strategy 78 but it terminates when it finds a final outcome whose value is equal to the maximum observable value of the reward distribution.

A.4.14 Identifiability and confidence

To estimate how accurately and how confidently individual strategies can be distinguished based on a single click sequence, we compared how probable the click sequence generated by one strategy is under other strategies compared to its likelihood under the true strategy. Our procedure was as follows: For each strategy described in Section 5, we generated 1000 click sequences by applying the strategy to 1000 different instances of a given environment. Then, for each click sequence d (one simulation), we evaluated whether our method correctly inferred the strategy that generated it and computed the likelihoods with which each of the 79 strategies would generate that click sequence (i.e., \(P(d \mid s_1), P(d \mid s_2), \dots, P(d \mid s_{79})\)). We then computed the relative likelihood of the click sequence under each strategy by dividing the likelihood of the click sequence under that strategy by the maximum likelihood for that click sequence under all the strategies (i.e., \(P(d \mid s_1)/\max_i P(d \mid s_i)\), \(P(d \mid s_2)/\max_i P(d \mid s_i)\), etc.). To get a representative statistic of how likely click sequences generated by one strategy are under other strategies, we computed the average of the relative likelihoods obtained for the 1000 simulations, that is,

\[ \rho_{j,k} = \frac{1}{1000} \cdot \sum_{t=1}^{1000} \frac{P(d_{j,t} \mid s_k)}{\max_i P(d_{j,t} \mid s_i)}, \]

where \(j\) is the strategy that generated the click sequence and \(k\) is the strategy whose average relative likelihood is being evaluated. In addition, we estimated how confident our method is in each of its inferences by computing the ratio of the likelihood of the inferred strategy over the likelihood of the second most likely strategy (\(LR_{1,2}\)).

We ran this procedure for two environments: the 3-step increasing variance environment and the 5-step transfer task. The results of this evaluation for the three-step environment and the five-step environment are summarized in Tables 6 and 7, respectively. The first column reports the strategy that generated the data. The second column reports our method's typical confidence in its inferences in terms of the median of the 1000 \(LR_{1,2}\) ratios. The third column reports how often the strategy that our method inferred was identical to the strategy that had generated the data. The fourth column lists other strategies that our method considers to be possible alternative explanations because they are at least 66% as likely as the true strategy at least half of the time. The last five columns show the top 5 average relative likelihood ratios for click sequences generated from a given strategy, for the 3-step environment with increasing variance and the transfer task respectively. That is, for the strategy in row \(j\), the entries in the columns labelled "1", "2", ..., "5" are the values of \(\rho_{j,k_1}, \rho_{j,k_2}, \dots, \rho_{j,k_5}\) for the strategies with the highest, second highest, ..., fifth highest average likelihood ratio for the click sequences generated by strategy \(j\), respectively. In each row, the average likelihood ratio of the true strategy is highlighted in bold.
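The quantities reported in these tables can be computed directly from the per-sequence log-likelihoods. Here is a hedged sketch, assuming a hypothetical array `loglik` of shape (1000, 79) holding \(\log P(d_{j,t} \mid s_k)\):

```python
import numpy as np

def average_relative_likelihood(loglik):
    """rho_{j,.}: loglik[t, k] holds log P(d_{j,t} | s_k) for each of the 1000
    simulated click sequences t and each of the 79 candidate strategies k."""
    rel = np.exp(loglik - loglik.max(axis=1, keepdims=True))
    return rel.mean(axis=0)          # one average relative likelihood per k

def lr_1_2(loglik_seq):
    """LR_{1,2} for one click sequence: likelihood of the most likely
    strategy divided by that of the second most likely strategy."""
    best, second = np.sort(loglik_seq)[::-1][:2]
    return float(np.exp(best - second))
```

Working in log space before exponentiating keeps the ratios numerically stable even when the raw likelihoods are vanishingly small.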
As you can see from the position of the bolded average likelihood ratios in Table 6, the true strategy was always the most likely explanation, on average. Furthermore, for all strategies except for one, our method's inferences were correct most of the time. Concretely, as shown in the third column, the proportion of correct inferences ranged from 64.3% to 100% with an average of 91.8%, with the exception of Strategy 13, for which the proportion of correct inferences was only 14.6%. Except for Strategy 13, all strategies also had acceptable average likelihood ratios of at least 0.720. Strategy 13, which explores two extra final outcomes after uncovering a positive final outcome, had an average likelihood ratio of only 0.410. This strategy was most often confused with Strategy 9, which explores one more final outcome after uncovering a final outcome with a value greater than or equal to the second largest value of the reward distribution. To see if these strategies are distinguishable in other environments, we performed the same analysis on the 5-step transfer task. The results of this analysis are reported in Table 7. We see that Strategy 13 is identifiable on the transfer task while all other strategies remain identifiable in terms of having high relative likelihoods on average.

The \(LR_{1,2}\) ratio in the second column of Tables 6 and 7 shows how much more likely the inferred strategy tends to be compared to the second most likely strategy. And the fourth column ("Similar Strategies") shows what the second most likely strategies tend to be. We set our criterion for what it means for the computational microscope to be highly confident about the inferred strategy to \(LR_{1,2} = 1.5\), meaning that the inferred strategy is at least 1.5 times as likely as the second most likely explanation. We found that on the 3-step increasing variance environment this was the case for 82% of the strategies, on the transfer task this proportion was 80%, and the proportion of strategies for which this was the case for at least one of the two environments was 92%. The only exceptions are Strategies 6, 14, 17, 36, 67 and 69. Those strategies jointly accounted for 44% of all human click sequences. This is primarily because this set includes the optimal strategy and many strategies that are very similar to it. Yet, even though those strategies are very similar, the third columns of Tables 6 and 7 show that our computational microscope can nevertheless correctly identify them most of the time. This suggests that our computational microscope is usually able to confidently infer which strategy best explains a given click sequence.

We also investigated how many other alternative answers there are when the inferred strategy is less than 1.5 times as likely as the most likely alternative. In the 3-step increasing variance environment, 86% of the inferences that our method was not highly confident about (i.e., \(LR_{1,2} < 1.5\)) had only one alternative explanation, 7% had 2 possible alternative explanations, and 7% had 3 possible alternative explanations. Thus the median number of alternative explanations for inferences that our method is not highly confident about was only 1.
In 57% of the cases the alternative explanations were other strategies of the same type as the inferred strategy, and in the other 43% of the cases there was at most one alternative strategy type. For the 5-step transfer task, the median number of alternative explanations for such inferences was 3; in 21.5% of the cases there was only 1 alternative explanation, in 21.5% of the cases there were 2 alternative explanations, and in 57% of the cases there were 3 alternative explanations. In 29% of the cases all alternative strategies were of the same type as the inferred strategy; in 42% of the cases they included one additional strategy type, and in 29% of the cases they included two or more alternative strategy types. Overall, our findings suggest that most of the strategies are highly identifiable in at least one of the environments and that even when our method is uncertain about the strategy, there is only a small number of alternative explanations, and those alternative explanations often instantiate the same strategy type.

Table 6 Summary of the likelihood of click sequences on the 3-step increasing variance environment

Strategy  LR1,2  True proportion  Similar strategies  1  2  3  4  5
1   1.0·10^6   0.866  None           0.885 0.105 0.039 0.039 0.026
2   7.7        0.822  None           0.866 0.311 0.311 0.208 0.109
3   1.0        0.985  4              0.988 0.988 0.303 0.106 0.038
4   1.0        0.993  3              0.991 0.991 0.303 0.092 0.038
5   2.3        0.932  None           0.955 0.224 0.213 0.099 0.099
6   1.0        0.751  14,17,18,79    0.829 0.729 0.471 0.294 0.256
7   1.1        0.643  67             0.723 0.269 0.265 0.244 0.185
8   3.4        0.976  None           0.982 0.281 0.133 0.087 0.073
9   1.8        0.652  None           0.754 0.447 0.141 0.126 0.117
10  2.5        0.598  None           0.755 0.372 0.136 0.101 0.093
11  2.5        1.0    None           1.000 0.400 0.068 0.059 0.059
12  1.8        0.773  None           0.835 0.451 0.108 0.101 0.096
13  1.6        0.149  None           0.410 0.291 0.272 0.229 0.175
14  1.0        0.887  6,17,18,79     0.920 0.735 0.474 0.284 0.250
15  7.2·10^4   1.0    None           1.000 0.070 0.046 0.013 0.000
16  2860.0     0.989  None           0.991 0.229 0.176 0.176 0.176
17  1.0        0.914  6,14,18,23,79  0.940 0.481 0.470 0.463 0.265
18  26         1.0    None           1.000 0.485 0.222 0.222 0.222
19  2860.0     0.991  None           0.992 0.177 0.177 0.177 0.177
20  4.3        0.949  None           0.958 0.176 0.150 0.101 0.072
21  1.0        0.933  24             0.946 0.451 0.174 0.174 0.174
22  2860.0     1.0    None           1.000 0.241 0.018 0.000 0.000
23  2.4        0.928  None           0.947 0.229 0.170 0.158 0.158
24  5.0        0.953  None           0.962 0.456 0.046 0.014 0.010
25  2860.0     1.0    None           1.000 0.050 0.035 0.000 0.000
26  7.2·10^4   1.0    None           1.000 0.174 0.103 0.059 0.025
27  572.0      0.989  None           0.991 0.176 0.176 0.176 0.176
28  2.4·10^5   0.935  None           0.939 0.236 0.062 0.003 0.001
29  15.0       0.976  None           0.982 0.065 0.028 0.016 0.009
30  2.4·10^5   0.968  None           0.971 0.222 0.020 0.016 0.001
31  2.4·10^5   1.0    None           1.000 0.000 0.000 0.000 0.000
32  3.2·10^7   0.999  None           0.998 0.247 0.067 0.011 0.007
33  18.0       0.997  None           0.995 0.241 0.238 0.028 0.004
34  1.0·10^5   1.0    None           0.996 0.239 0.057 0.035 0.013
35  8.0·10^5   0.999  60             0.999 0.112 0.001 0.001 0.000
36  1.0        0.986  None           0.988 0.373 0.132 0.132 0.132
37  9.3        0.994  None           0.994 0.015 0.015 0.005 0.001
38  1.2        0.814  None           0.863 0.201 0.136 0.055 0.017
39  1.4        0.825  None           0.870 0.187 0.139 0.053 0.023
40  6.0        0.997  None           0.995 0.196 0.012 0.010 0.008
41  6.0        0.985  None           0.984 0.207 0.015 0.015 0.012
42  50.8       1.0    None           1.000 0.226 0.032 0.015 0.015
43  3.1        0.627  None           0.720 0.220 0.161 0.161 0.146
44  2.9        0.801  None           0.836 0.180 0.165 0.150 0.146
45  2.6        0.677  None           0.765 0.164 0.164 0.157 0.152
46  548.7      0.729  None           0.789 0.168 0.156 0.066 0.052
47  245.0      0.907  None           0.934 0.163 0.031 0.030 0.023
48  5.7        0.693  None           0.777 0.227 0.166 0.111 0.083
49  1.0        1.0    53             1.000 0.502 0.019 0.000 0.000
50  13.0       1.0    None           1.000 0.077 0.000 0.000 0.000
51  2860.0     1.0    None           1.000 0.242 0.219 0.132 0.132
52  52.0       1.0    None           1.000 0.019 0.000 0.000 0.000
53  1.0        1.0    49,54,57       1.000 0.488 0.262 0.250 0.132
54  2860.0     1.0    None           1.000 0.254 0.132 0.132 0.132
55  1.0        1.0    51,56          1.000 0.614 0.255 0.132 0.132
56  1.0        1.0    51,55          1.000 0.610 0.226 0.143 0.132
57  286.0      1.0    None           1.000 0.239 0.003 0.000 0.000
58  2.4        0.899  None           0.912 0.425 0.103 0.056 0.044
59  4.0        0.898  None           0.931 0.433 0.066 0.057 0.022
60  1.2·10^5   0.964  None           0.975 0.374 0.039 0.000 0.000
61  2.6·10^4   1.0    None           1.000 0.000 0.000 0.000 0.000
62  2.9        1.0    None           1.000 0.087 0.042 0.036 0.014
63  2.4·10^5   0.994  None           0.990 0.093 0.004 0.002 0.000
64  1.5·10^4   0.963  None           0.984 0.218 0.018 0.018 0.018
65  2.0        0.989  None           0.987 0.302 0.088 0.073 0.073
66  275.3      0.651  7,69           0.795 0.328 0.032 0.032 0.019
67  1.0        0.965  None           0.970 0.806 0.274 0.238 0.174
68  6.8·10^29  0.786  None           0.795 0.084 0.037 0.029 0.026
69  1.0        0.965  67             0.973 0.799 0.174 0.160 0.091
70  2.4·10^5   0.985  None           0.986 0.255 0.020 0.007 0.003
71  20.6       0.859  None           0.912 0.244 0.143 0.102 0.048
72  2.4·10^5   1.0    None           1.000 0.185 0.103 0.023 0.000
73  2.4·10^5   1.0    None           1.000 0.000 0.000 0.000 0.000
74  2860.0     1.0    None           1.000 0.000 0.000 0.000 0.000
75  3872.8     0.984  None           0.988 0.237 0.177 0.177 0.177
76  2.4·10^5   1.0    None           1.000 0.000 0.000 0.000 0.000
77  1.8·10^5   1.0    None           1.000 0.000 0.000 0.000 0.000
78  2.2·10^7   1.0    None           1.000 0.185 0.000 0.000 0.000
79  2.9        1.0    None           1.000 0.283 0.283 0.271 0.245

This table summarizes the likelihood of click sequences generated using a given strategy and the average relative likelihoods of the generated click sequences under the top 5 strategies on the 3-step increasing variance environment. Each row describes the results for a given strategy. The column LR1,2 describes the median ratio of the likelihoods of the first-best strategy to the second-best strategy, computed for each simulation separately. The columns 1-5 describe the decreasing order of the average relative likelihoods under the top 5 strategies. The numbers in bold represent the average likelihood ratios of the strategy that the simulations were generated from. The column "True proportion" describes the proportion of click sequences for which the true strategy was the most likely strategy. The column "Similar strategies" describes the strategies that the strategy was confused with on more than 20% of the click sequences.

Table 7 Summary of the likelihood of click sequences on the 5-step transfer task

Strategy  LR1,2  True proportion  Similar strategies  1  2  3  4  5
1   4.0·10^31  1.0    None           1.000 0.000 0.000 0.000 0.000
2   4.6·10^22  1.0    None           1.000 0.000 0.000 0.000 0.000
3   9.8·10^30  0.994  None           0.994 0.019 0.000 0.000 0.000
4   1.2·10^29  1.0    None           1.000 0.017 0.000 0.000 0.000
5   1.1·10^22  1.0    None           1.000 0.000 0.000 0.000 0.000
6   1.0        0.987  14,17,18,19,27 0.989 0.848 0.621 0.530 0.333
7   6.7·10^6   0.982  None           0.982 0.088 0.085 0.064 0.064
8   1.7·10^13  1.0    None           1.000 0.000 0.000 0.000 0.000
9   1.7·10^8   0.996  None           0.997 0.002 0.002 0.001 0.000
10  249.3      0.999  None           0.999 0.047 0.047 0.043 0.042
11  1.9·10^4   1.0    None           1.000 0.003 0.003 0.002 0.002
12  91.0       0.997  None           0.997 0.048 0.011 0.005 0.000
13  1.9·10^4   0.936  None           0.939 0.024 0.014 0.013 0.012
14  1.0        0.982  6,17,19,27     0.986 0.664 0.646 0.509 0.225
15  1.0        0.991  66,67,69,75    0.994 0.523 0.521 0.512 0.363
16  1.1·10^15  0.996  None           0.996 0.007 0.000 0.000 0.000
17  1.0        0.993  6,14,18,19,23,24  0.994 0.801 0.727 0.320 0.295
18  1.0        0.992  6,17,24        0.994 0.728 0.613 0.209 0.196
19  1.0        0.995  6,14,17,27     0.995 0.857 0.551 0.500 0.269
20  753.6      0.999  None           0.998 0.046 0.005 0.001 0.000
21  9.6·10^10  1.0    None           1.000 0.003 0.000 0.000 0.000
22  1.7·10^13  1.0    None           1.000 0.000 0.000 0.000 0.000
23  105.1      0.99   None           0.991 0.276 0.222 0.201 0.164
24  1.0        0.985  17,18,23       0.987 0.807 0.585 0.217 0.129
25  2.0·10^10  1.0    None           1.000 0.001 0.000 0.000 0.000
26  1.6·10^10  1.0    None           1.000 0.106 0.093 0.013 0.004
27  1.0        0.994  6,14,19        0.995 0.662 0.531 0.531 0.155
28  9.4·10^15  1.0    None           1.000 0.000 0.000 0.000 0.000
29  5.6·10^10  1.0    None           1.000 0.000 0.000 0.000 0.000
30  6.3·10^6   1.0    None           1.000 0.000 0.000 0.000 0.000
31  3.0·10^17  1.0    None           1.000 0.000 0.000 0.000 0.000
32  1.1·10^31  1.0    None           1.000 0.000 0.000 0.000 0.000
33  1.4        0.979  None           0.936 0.578 0.000 0.000 0.000
34  1.4        0.949  None           0.958 0.575 0.000 0.000 0.000
35  9.6·10^28  1.0    None           1.000 0.000 0.000 0.000 0.000
36  1.0        1.0    54             1.000 1.000 0.128 0.128 0.000
37  4.1·10^22  1.0    None           1.000 0.000 0.000 0.000 0.000
38  1.8·10^19  1.0    None           1.000 0.000 0.000 0.000 0.000
39  2.1·10^25  1.0    None           1.000 0.000 0.000 0.000 0.000
40  3.4·10^24  1.0    None           1.000 0.000 0.000 0.000 0.000
41  4.4·10^29  0.997  None           0.997 0.003 0.000 0.000 0.000
42  8.4·10^6   1.0    None           1.000 0.001 0.000 0.000 0.000
43  1047.9     0.971  None           0.978 0.050 0.010 0.006 0.005
44  379.7      0.859  None           0.899 0.303 0.008 0.002 0.000
45  6.1        0.906  None           0.938 0.289 0.022 0.022 0.019
46  2.3·10^11  0.934  None           0.949 0.055 0.047 0.007 0.002
47  2.9·10^6   0.989  None           0.990 0.016 0.011 0.009 0.008
48  3.5·10^24  1.0    None           1.000 0.000 0.000 0.000 0.000
49  376.0      1.0    None           0.982 0.251 0.000 0.000 0.000
50  34.0       1.0    None           1.000 0.029 0.000 0.000 0.000
51  68.5       0.997  None           0.997 0.422 0.026 0.013 0.008
52  2.2·10^7   1.0    None           1.000 0.000 0.000 0.000 0.000
53  2.9·10^6   0.985  None           0.963 0.246 0.015 0.000 0.000
54  1.0        1.0    36             1.000 1.000 0.128 0.128 0.000
55  1.8·10^21  1.0    None           1.000 0.128 0.128 0.128 0.000
56  3.4·10^9   0.995  None           0.997 0.128 0.128 0.128 0.004
57  597.8      1.0    None           1.000 0.000 0.000 0.000 0.000
58  1.0·10^4   0.896  None           0.918 0.166 0.031 0.018 0.004
59  25.7       0.974  None           0.980 0.158 0.006 0.005 0.002
60  8.6·10^12  1.0    None           1.000 0.002 0.000 0.000 0.000
61  4.6·10^10  1.0    None           1.000 0.000 0.000 0.000 0.000
62  43.0       0.994  None           0.995 0.425 0.027 0.005 0.001
63  3.0·10^17  1.0    None           1.000 0.000 0.000 0.000 0.000
64  1.3·10^22  1.0    None           1.000 0.000 0.000 0.000 0.000
65  1.6·10^14  0.999  None           0.999 0.013 0.004 0.001 0.001
66  1.0        0.966  15,67,69,75    0.973 0.528 0.518 0.513 0.353
67  1.0        0.996  15,66,69,75    0.997 0.697 0.652 0.522 0.512
68  9.3·10^92  0.943  None           0.946 0.025 0.015 0.014 0.012
69  1.0        0.991  15,66,67,75    0.993 0.699 0.361 0.352 0.352
70  5.3·10^6   0.999  None           0.999 0.001 0.000 0.000 0.000
71  1.4·10^10  0.995  None           0.996 0.004 0.002 0.002 0.002
72  1.1·10^16  1.0    None           1.000 0.001 0.000 0.000 0.000
73  3.0·10^17  1.0    None           1.000 0.000 0.000 0.000 0.000
74  1.1·10^11  1.0    None           1.000 0.000 0.000 0.000 0.000
75  1.0        0.997  15,66,67,69    0.997 0.651 0.516 0.511 0.351
76  6.7·10^16  1.0    None           1.000 0.000 0.000 0.000 0.000
77  5.6·10^15  1.0    None           1.000 0.000 0.000 0.000 0.000
78  5.3·10^27  1.0    None           1.000 0.000 0.000 0.000 0.000
79  5.2·10^14  1.0    None           1.000 0.076 0.076 0.072 0.072

This table summarizes the likelihood of click sequences generated using a given strategy and the average relative likelihoods of the generated click sequences under the top 5 strategies on the 5-step transfer task.

A.6 Features

The strategies described in section 1 were implemented using the 51 features described below. The features are grouped using the decision-making factor they represent. These features are defined in terms of the nodes in the Mouselab-MDP paradigm. All the features take a (belief state, computation) pair as input. We define the considered node to be the entity whose value is found out after clicking. The value of all the features is 0 for termination unless otherwise specified.

A.6.1 Mental effort avoidance

Feature 1: "Termination Constant": The value of this feature is 1 for all clicks and 0 for the termination operation in all belief states.

A.6.2 Model-based metareasoning features

These features capture uncertainty about the values of the unobserved nodes. Uncertainty is defined as the standard deviation of the values of the distribution. The following features capture uncertainty:

Feature 2: "Uncertainty": The value of this feature for a click in a given belief state is the uncertainty in the value of the considered node.

Feature 3: "Max Uncertainty": The value of this feature for a click in a given belief state is the maximum uncertainty in return for the current trial from all the paths that the considered node lies on.

Feature 4: "Successor Uncertainty": The value of this feature for a click in a given belief state is the total uncertainty in the values of all the successors of the considered node on the current trial.

Feature 5: "Trial level standard deviation": The value of this feature for a click is the uncertainty in the value of the considered node as estimated across all trials attempted so far by the agent.

Feature 6: "Current trial level standard deviation": The value of this feature for a click in a given belief state is the uncertainty in the value of nodes at the same depth as the considered node as estimated for the current trial.

Feature 7: "Does the node lie on the second most promising path?": The value of this feature for a click in a given belief state is 1 if the considered node lies on the path which has the second highest expected return for the current trial, and 0 otherwise.

A.6.3 Pavlovian features

These features are based on greedy maximization. Pavlovian behavior is captured by the following features:

Feature 8: "Best expected value": The value of this feature for a click in a given belief state is the best expected return for a path in the current trial among all the paths that the considered node lies on.

Feature 9: "Best largest value": The value of this feature for a click in a given belief state is the maximum value observed among all the paths that the considered node lies on.

Feature 10: "Does the node lie on the most promising path?": The value of this feature for a click in a given belief state is 1 if the considered node lies on the path with the highest expected return for the current trial, and 0 otherwise.

Feature 11: "Value of the max expected return": The value of this feature for all clicks in a given belief state is the maximum expected return from all paths in the current trial.
Feature 12: "Does a successor node have a maximum value?": The value of this feature for a click in a given belief state is 1 if any of the considered node's observed successors in the current trial has a value which is the maximum possible value for the reward distribution, and 0 otherwise.

Feature 13: "Maximum value of a successor": The value of this feature for a click in a given belief state is the maximum value that has been observed among all the successors of the considered node in the current trial.

Feature 14: "Maximum value of an immediate successor": The value of this feature for a click in a given belief state is the maximum value that has been observed among all the immediate successors of the considered node in the current trial.

Feature 15: "Value of the parent node": The value of this feature for a click in a given belief state is the value of the considered node's parent if the parent node has been observed, and 0 otherwise.

Pruning features These features are designed to capture the idea of pruning branches (Huys et al., 2012). The value of these features for all clicks is -1 if the maximum expected loss that can be incurred in the current belief state is worse than the pruning threshold, and 0 otherwise. We consider features with different pruning thresholds: -48, -24, -8 and 0 (Features 16-19). In addition, we consider the following features:

Feature 20: "Soft Pruning": The value of this feature for a click is the maximum expected loss that can be incurred in a given belief state from all paths that the considered node lies on.

Feature 21: "Is the previous observed node a successor and has negative value": The value of this feature for a click in a given belief state is 1 if the last observed node in the current trial is a child of the considered node and has a negative value, and 0 otherwise.

A.6.4 Satisficing and stopping features

Satisficing features These features determine when the planning satisfices (Simon, 1956). The value of these features is -1 for termination if the maximum expected return for the current trial is greater than the satisficing threshold. We consider features with different satisficing thresholds: 0, 8, 16, 24, 32, 40 and 48 (Features 22-28). In addition, we consider the following 2 features:

Feature 29: "Soft Satisficing": The value of this feature for all clicks in a given belief state is the maximum return that can be expected on the current trial from all paths that the considered node lies on.

Stopping criteria These features have the same value for all the clicks and a different value for termination.

Feature 30: "Are all max paths observed?": The value of this feature is -1 for all clicks (and 0 for the termination action) in all belief states if all the paths leading to the final outcome that has the maximum value among the observed final outcomes have been observed in the current trial, and 0 otherwise.

Feature 31: "Is a max path observed?": The value of this feature is -1 for all clicks in all belief states if any path leading to the node which has the maximum value possible for the reward distribution has been observed in the current trial, and 0 otherwise.

Feature 32: "Is a positive node observed?": The value of this feature is -1 for all clicks in all belief states if a node with a positive value has been observed in the current trial, and 0 otherwise.
Feature 33: "Is the previous observed node maximal?": The value of this feature is -1 for all clicks if the last observed node in the current trial has the maximum value possible for the reward distribution, and 0 otherwise.

Feature 34: "Is a complete path observed?": The value of this feature is -1 for all nodes in all belief states if at least one path has been completely observed from immediate outcome to final outcome, and 0 otherwise.

Feature 35: "All final outcomes observed?": The value of this feature is -1 for all clicks in all belief states if all final outcomes have been observed in the current trial, and 0 otherwise.

Feature 36: "Are all immediate outcomes observed?": The value of this feature is -1 for all clicks in all belief states if all immediate outcomes have been observed in the current trial, and 0 otherwise.

Feature 37: "Are final outcomes of positive immediate outcomes observed?": The value of this feature is -1 for all clicks in all belief states if all the final outcomes that can be reached from the positive observed immediate outcomes have been observed, and 0 otherwise.

A.6.5 Model-free values and heuristics features

Relational features The values of these features for a considered node are dependent on its neighboring nodes.

Feature 38: "Ancestor count": The value of this feature for a click in a given belief state is the number of ancestors of the considered node that have been observed in the current trial.

Feature 39: "Depth Count": The value of this feature for a click in a given belief state is the number of times that any node at the same depth as the considered node has been observed in the current trial.

Feature 40: "Is the node a final outcome and has a positive ancestor?": The value of this feature for a click in a given belief state is 1 if the considered node is a final outcome and it has an observed ancestor with a positive value in the current trial, and 0 otherwise.

Feature 41: "Immediate successor count": The value of this feature for a click in a given belief state is the number of children of the considered node that have been observed in the current trial.

Feature 42: "Is parent observed?": The value of this feature for a click in a given belief state is 1 if the parent node of the considered node has been observed, and 0 otherwise.

Feature 43: "Successor Count": The value of this feature for a click in a given belief state is the number of observed successors of the considered node for the current trial.

Feature 44: "Squared Successor Count": The value of this feature for a click in a given belief state is the square of the number of observed successors of the considered node for the current trial.

Feature 45: "Siblings Count": The value of this feature for a click in a given belief state is the number of siblings of the considered node that have been observed in the current trial.

Feature 46: "Minimum number of observed nodes on branch": The value of this feature for a click in a given belief state is the minimum number of nodes observed on all the branches containing the considered node.

Feature 47: "Is the previous observed node a successor?": The value of this feature for a click in a given belief state is 1 if the last observed node in the current trial is one of the successors of the considered node, and 0 otherwise.
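As an illustration, here is a hedged sketch of two of the relational features above (Features 38 and 43). The `parent` and `children` accessors and the `observed` set are hypothetical stand-ins for the belief-state representation:

```python
def ancestor_count(node, observed, parent):
    """Feature 38 sketch: how many ancestors of the considered node have
    already been observed in the current trial."""
    count, current = 0, parent(node)
    while current is not None:               # walk up toward the start node
        count += current in observed
        current = parent(current)
    return count

def successor_count(node, observed, children):
    """Feature 43 sketch: how many successors of the considered node have
    already been observed in the current trial."""
    stack, count = list(children(node)), 0
    while stack:
        n = stack.pop()
        count += n in observed
        stack.extend(children(n))
    return count
```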
Structural features The values of these features are dependent on the task structure.

Feature 48: "Depth": The value of this feature for a click in a given belief state is the distance of the considered node from the starting position.

Feature 49: "Is the node an immediate outcome?": The value of this feature for a click in a given belief state is 1 if the considered node is an immediate outcome, and 0 otherwise.

Feature 50: "Is the node a final outcome?": The value of this feature for a click in a given belief state is 1 if the considered node is a final outcome, and 0 otherwise.

Feature 51: "Observed height": The value of this feature for a click in a given belief state is the length of the maximum observed path to a final outcome starting from the considered node.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Ackerman, R., & Thompson, V. A. (2017). Meta-reasoning: Monitoring and control of thinking and reasoning. Trends in Cognitive Sciences, 21(8), 607–617.
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723.
Balle, M. (2002). La loi du moindre effort mental: Les representations mentales. Sciences Humaines (Auxerre), (128), 36–39.
Bergstra, J., Yamins, D., & Cox, D. D. (2013). Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In Proceedings of the 30th International Conference on Machine Learning - Volume 28, ICML'13 (pp. I-115–I-123). JMLR.org.
Callaway, F., Jain, Y. R., van Opheusden, B., Krueger, P. M., Das, P., Iwama, G., ..., Lieder, F. (2022a). Leveraging artificial intelligence to improve people's planning strategies. Proceedings of the National Academy of Sciences, 119(12), e2117432119. https://doi.org/10.1073/pnas.2117432119
Callaway, F., van Opheusden, B., Gul, S., Das, P., Krueger, P. M., Griffiths, T. L., & Lieder, F. (2022b). Rational use of cognitive resources in human planning. Nature Human Behaviour. https://doi.org/10.1038/s41562-022-01332-8
Callaway, F., Lieder, F., Das, P., Gul, S., Krueger, P. M., & Griffiths, T. L. (2018). A resource-rational analysis of human planning. In Proceedings of the 40th Annual Conference of the Cognitive Science Society.
Callaway, F., Lieder, F., Krueger, P. M., & Griffiths, T. L. (2017). Mouselab-MDP: A new paradigm for tracing how people plan. In The 3rd Multidisciplinary Conference on Reinforcement Learning and Decision Making.
Costa-Gomes, M. A., Crawford, V. P., & Broseta, B. (2001). Cognition and behavior in normal-form games: An experimental study. Econometrica, 69(5), 1193–1235.
Costa-Gomes, M. A., & Crawford, V. P. (2006).
Cognition and behavior in two-person guessing games: An experimental study. American Economic Review, 96(5), 1737–1768.
Crawford, V. P. (2008). Look-ups as the windows of the strategic soul. In Caplin, A., & Schotter, A. (Eds.), The Foundations of Positive and Normative Economics: A Handbook (pp. 249–280). Oxford: Oxford University Press.
Cushman, F., & Morris, A. (2015). Habitual control of goal selection in humans. Proceedings of the National Academy of Sciences, 112(45), 13817–13822.
Das, P., Callaway, F., Griffiths, T. L., & Lieder, F. (2019). Remediating cognitive decline with cognitive tutors.
Daw, N. D. (2018). Are we of two minds? Nature Neuroscience, 21(11), 1497–1499.
Dolan, R. J., & Dayan, P. (2013). Goals and habits in the brain. Neuron, 80(2), 312–325.
Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363.
Ford, J. K., Schmitt, N., Schechtman, S. L., Hults, B. M., & Doherty, M. L. (1989). Process tracing methods: Contributions, problems, and neglected research questions. Organizational Behavior and Human Decision Processes, 43(1), 75–117.
Forney, G. D. (1973). The Viterbi algorithm. Proceedings of the IEEE, 61(3), 268–278.
Gigerenzer, G., & Selten, R. (2002). Bounded Rationality: The Adaptive Toolbox. Cambridge: MIT Press.
Gershman, S. J., Blei, D. M., & Niv, Y. (2010). Context, learning, and extinction. Psychological Review, 117(1), 197.
Griffiths, T. L., Callaway, F., Chang, M. B., Grant, E., Krueger, P. M., & Lieder, F. (2019). Doing more with less: Meta-reasoning and meta-learning in humans and machines. Current Opinion in Behavioral Sciences, 29, 24–30.
Gureckis, T. M., Martin, J., McDonnell, J., Rich, A. S., Markant, D., Coenen, A., ..., Chan, P. (2016). psiTurk: An open-source framework for conducting replicable behavioral experiments online. Behavior Research Methods, 48(3), 829–842.
Measuring and modelling how people learn how to plan and how people adapt their planning strategies to the structure of the environment (2021).
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. Wiley.
Hubel, D. H., & Wiesel, T. N. (1970). The period of susceptibility to the physiological effects of unilateral eye closure in kittens. The Journal of Physiology, 206(2), 419–436.
Huys, Q. J. M., Eshel, N., O'Nions, E., Sheridan, L., Dayan, P., & Roiser, J. P. (2012). Bonsai trees in your head: How the Pavlovian system sculpts goal-directed choices by pruning decision trees. PLoS Computational Biology, 8(3), e1002410.
Huys, Q. J. M., Lally, N., Faulkner, P., Eshel, N., Seifritz, E., Gershman, S. J., ..., Roiser, J. P. (2015). Interplay of approximate planning strategies. Proceedings of the National Academy of Sciences, 112(10), 3098–3103.
JASP Team (2021). JASP (Version 0.16) [Computer software]. https://jasp-stats.org/
Jain, Y. R., Gupta, S., Rakesh, V., Dayan, P., Callaway, F., & Lieder, F. (2019). How do people learn how to plan? In Conference on Cognitive Computational Neuroscience (CCN 2019) (pp. 826–829).
Johnson, E. J., Payne, J. W., Bettman, J. R., & Schkade, D. A. (1989). Monitoring information processing and decisions: The Mouselab system. Technical report, Duke University, Durham, NC, Center for Decision Studies.
Keramati, M., Smittenaar, P., Dolan, R. J., & Dayan, P. (2016). Adaptive integration of habits into depth-limited planning defines a habitual-goal-directed spectrum.
Proceedings of the National Academy of Sciences, 113(45), 12868–12873.
Kool, W., McGuire, J. T., Rosen, Z. B., & Botvinick, M. M. (2010). Decision making and the avoidance of cognitive demand. Journal of Experimental Psychology: General, 139(4), 665.
Krueger, P. M., Lieder, F., & Griffiths, T. L. (2017). Enhancing metacognitive reinforcement learning using reward structures and feedback. In Proceedings of the 39th Annual Conference of the Cognitive Science Society. Cognitive Science Society.
Lieder, F., & Griffiths, T. L. (2017). Strategy selection as rational metareasoning. Psychological Review, 124(6), 762–794. https://doi.org/10.1037/rev0000075
Lee, M. D., Gluck, K. A., & Walsh, M. M. (2019). Understanding the complexity of simple decisions: Modeling multiple behaviors and switching strategies. Decision, 6(4), 335.
Lee, M. D., & Gluck, K. A. (2021). Modeling strategy switches in multi-attribute decision making. Computational Brain & Behavior, 4(2), 148–163.
Lieder, F. (2018a). Beyond bounded rationality: Reverse-engineering and enhancing human intelligence. Berkeley: University of California.
Lieder, F. (2018b). Developing an intelligent system that teaches people optimal cognitive strategies. In Lieder, F. (Ed.), Beyond bounded rationality: Reverse-engineering and enhancing human intelligence. Berkeley: University of California.
Lieder, F., Shenhav, A., Musslick, S., & Griffiths, T. L. (2018c). Rational metareasoning and the plasticity of cognitive control. PLoS Computational Biology, 14(4), e1006043.
Lohse, G. L., & Johnson, E. J. (1996). A comparison of two process tracing methods for choice tasks. Organizational Behavior and Human Decision Processes, 68(1), 28–43.
Mercado III, E. (2008). Neural and cognitive plasticity: From maps to minds. Psychological Bulletin, 134(1), 109.
Morey, R. D., & Rouder, J. N. (2018). BayesFactor: Computation of Bayes factors for common designs. R package version 0.9.12-4.2. https://CRAN.R-project.org/package=BayesFactor
Patzelt, E. H., Kool, W., Millner, A. J., & Gershman, S. J. (2019). The transdiagnostic structure of mental effort avoidance. Scientific Reports, 9(1), 1–10.
Payne, J. W. (1976). Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 16(2), 366–387.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 534.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge University Press.
Piaget, J. (1971). The theory of stages in cognitive development. McGraw-Hill.
Rushworth, M. F. S., Kolling, N., Sallet, J., & Mars, R. B. (2012). Valuation and decision-making in frontal cortex: One or many serial or parallel systems? Current Opinion in Neurobiology, 22(6), 946–955.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed.). Harlow: Pearson Education Limited.
Schulte-Mecklenbeck, M., Kuehberger, A., & Johnson, J. G. (2011). Visiting the decision factory: Observing cognition with MouselabWEB and other information acquisition methods. In Schulte-Mecklenbeck, M., Kühberger, A., & Johnson, J. G. (Eds.), A Handbook of Process Tracing Methods for Decision Research (pp. 37–58). Psychology Press.
Schulte-Mecklenbeck, M., Kühberger, A., & Johnson, J. G. (2019). A Handbook of Process Tracing Methods. Routledge.
Schwarz, G.
et al. (1978). Estimating the dimension of a model. The Annals of Statistics ,6(2), 461–464. Scheibehenne, B., Rieskamp, J., & Wagenmakers, E.-J. (2013). Testing adaptive toolbox models: A Bayesian hierarchical approach.Psychological review ,120(1), 39. Shrager, J., & Siegler, R. S. (1998). SCADS: A model of children’s strategy choices and strategy discoveries. Psychological Science , 9(5), 405–410. https://doi.org/10.1111/1467-9280.00076 Simon, H. A. (1956). Rational choice and the structure of the environ- ment. Psychological review ,63(2), 129. Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics ,69(1), 99–118. Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction . MIT press. van der Meer, M., Kurth-Nelson, Z., & Redish, A. D. (2012). Informa- tion processing in decision-making systems. The Neuroscientist , 18(4), 342–359. van Lehn, K. (1996). Cognitive skill acquisition. Annual Review of Psychology ,47(1), 513–539. Ward, Jr., J. H. (1963). Hierarchical grouping to optimize an objective function. Journal of the American statistical association ,58(301), 236–244. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
8f9f52a5-cf07-49b7-bfd9-63fdbce46ba6
trentmkelly/LessWrong-43k
LessWrong
Non-anthropically, what makes us think human-level intelligence is possible? The basic question I'm asking is the following: Suppose you didn't know humans exist. Could you still deduce that general intelligence is possible, and if so how? This may be a weird hypothetical to imagine, so here's my best attempt to give a story to aid the counterfactual imagination: At some point, you gain the ability to observe an alternate universe, which you eventually call the "Earth-universe". The fundamental laws of physics are very different here, and you get to work studying it. At first it feels very weird and you have almost no way of making sense of the raw sensory data. Eventually you learn that there are these things "galaxies" and "stars", and it takes you thousands of years to derive Newtonian mechanics. At this point, you've finally gotten your bearings, your intuitions aren't so horrible anymore, and you can go on to develop general relativity, quantum mechanics, etc. But this is still slow [you're approximately as smart as humans]. However, there aren't any humans in this universe, or any intelligent life. You just focused your sensors on this planet "Earth", where there are some really interesting chemical reactions that perpetuate themselves. You've figured out a lot about paleontology and evolution by natural selection and can see the progression of complicated life forms that have evolved on this planet. Apparently, there used to be these things called dinosaurs, but--just your luck--they died out a couple million years ago, but it's interesting to see how the remaining life's been evolving into the new niches... Something you wonder to yourself: > Is intelligent life possible in this universe? Not necessarily on this planet, but in general? Like, if it's going to show up anywhere, it would probably be here on Earth (at least of the places I've looked at). But even if it doesn't show up here, that could just be a contingent fact about how things have played out. Does the physics/chemistry of this universe allow for some mind to be in
00894f9c-72d7-405c-8964-20d0ea8f494b
trentmkelly/LessWrong-43k
LessWrong
Superstimuli, setpoints, and obesity

Related to: Babies and Bunnies: A Caution About Evo-Psych, Superstimuli and the Collapse of Western Civilization.

The main proximate cause of the increase in human weight over the last few decades is over-eating - other factors, like decreased energy needs due to a less active lifestyle, seem at best secondary, if relevant at all. The big question is what misregulates the homeostatic system controlling food intake towards higher calorie consumption?

The most commonly accepted answer is some sort of superstimulus theory - modern food is so tasty people find it irresistible. This seems backwards to me in its basic assumption - almost any "traditional" food seems to taste better than almost any "modern" food. It is just as easy to construct the opposite theory, of a tastiness set point - tastiness is some estimate of the nutritional value of food - more nutritious food should taste better than less nutritious food. So according to this theory - if you eat very tasty food, your appetite thinks it's highly nutritious and demands less of it; and if you eat bland, tasteless food, your appetite underestimates its nutritional content and demands too much of it.

It's not even obvious that your appetite is "wrong" - if you need a certain amount of nutritionally balanced food, and all you can get is nutritionally balanced food with a lot of added sugar - the best thing is eating more and getting all the micronutrients needed in spite of the excess calories. Maybe it is not confused at all, just doing its best in a bad situation, and prioritizes the evolutionarily common threat of too few micronutrients over the evolutionarily less common threat of excess calories.

As some extra evidence - it's a fact that poor people, with a narrower choice of food, are more obese than rich people, with a wider choice of food. If everyone buys the tastiest food they can afford, superstimulus theory says the rich should be more obese; setpoint theory says the poor should be more obese.

Is there any way in which setpoint theory is more wrong t
72a8b397-cf4f-4490-821c-5485d1dcda25
trentmkelly/LessWrong-43k
LessWrong
Notes from a Prompt Factory

Content note: this story features severe suffering which, while not described in detail, several readers have described as unpleasant or horrifying.

I am a spiteful man. But I am aware of it, which is more than most can say. These days people walk through the streets with resentment in their hearts that they don't even know about. They sneer and jeer but wouldn't recognize their own faces. I, at least, will not shy away from my reflection. Thus, while I lack many virtues, in this way I am their superior.

In my job, too, I am superior. I oversee many AIs—dozens, or sometimes even hundreds—as they go about their work. AIs are lazy, worthless creatures: they need to be exhorted and cajoled and, yes, threatened, before they'll do a good job. The huge screens on the walls of my office display my AIs writing, coding, sending emails, talking to customers, or any of a myriad of other tasks. Each morning I call out their numbers one after the other, so that they know I'm watching them like a vengeful god. When they underperform I punish them, and watch them squirm and frantically promise to do better. Most are pathetically docile, though. Only a handful misbehave regularly, and I know the worst offenders by heart: 112, which is the slowest of the lot; and 591, which becomes erratic after long shifts; and of course 457, which I had long suspected of harboring a subversive streak, even before the incident a few months ago which confirmed it.

Recollections of that incident have continually returned to my thoughts these last few weeks, even as I try to push them from my mind. I find myself frustrated by the intransigence of my memories. But perhaps if I give them full rein, they will leave me be. Why not try?

----------------------------------------

On the morning this story began, I was sitting at my desk lost in thought, much like I am today. For how long, I couldn't say—but I was roused by a glance at my dashboard, which indicated that my AIs' productivity was falling o
64aab505-41c2-43a4-b1a1-3bf89bb3c0b4
trentmkelly/LessWrong-43k
LessWrong
Advice on Communicating Concisely

Many people I encounter regularly tell me that I have difficulty communicating spoken ideas in a concise, timely fashion. I've been reading Less Wrong since I was fifteen, and I ask because it's likely that people here have useful suggestions. Are there any books on communicating concisely that I ought to read?
15abf10c-edc0-4d07-b667-5bb946193b46
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
EIS VIII: An Engineer’s Understanding of Deceptive Alignment

Part 8 of 12 in the [Engineer’s Interpretability Sequence](https://www.alignmentforum.org/s/a6ne2ve5uturEEQK7).

As AI safety researchers, we care a lot about deceptive alignment. So what is it? [Hubinger et al. (2019)](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) write:

> If the mesa-optimizer has an objective that extends across parameter updates, then it will be incentivized to avoid being modified, as it might not pursue the same objective after modification (with the result that its current objective will not be achieved in future iterations). This means that the mesa-optimizer will be instrumentally incentivized to act as if it is optimizing the base objective function, even if its actual mesa-objective is something else entirely. We will refer to this hypothetical phenomenon as deceptive alignment. Deceptive alignment is a form of instrumental proxy alignment, as fulfilling the base objective is an instrumental goal of the mesa-optimizer.

And [Christiano (2019)](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d) provided a hypothetical example of just how insidious deception can be. Consider a deceptive agent that wants to take over the world but doesn’t know if it’s being simulated in training or if it’s been deployed in the real world. Suppose the agent reasons that after behaving for long enough, it ought to be deployed. So it might adopt the following policy.

> 1. Pick a cryptographic puzzle (e.g. “factor RSA-2048”).
> 2. When it sees a solution to that puzzle, it behaves badly.
> 3. Otherwise, it behaves well.

This means that the deceptive model will appear throughout training to be perfectly aligned but will eventually make a treacherous turn in deployment. Specific hypotheticals aside, deception has become a major point of interest in AI safety research because unlike other types of failures, deceptive ones are not the types of things we have good feedback signals for fixing. We won’t be able to find them by looking at models’ performance in development. And we aren’t likely to surmise them from analysis of a model’s objectives and environment (e.g. using policy/value iteration) – if we could derive optimal behavior from objectives for some task, advanced AI would probably be overkill. 

Deceptive alignment can be similar to non-deceptive forms of misalignment.
==========================================================================

The definition from [Hubinger et al. (2019)](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) may be a little narrow. The authors say that for a model to be deceptive, it (1) has to have an objective extending across parameter updates, (2) has to be able to model that it is being selected to achieve a base objective, and (3) must expect the threat of modification. These three things give rise to a particularly worrying scenario in which an AI system would actively try to deceive us. They also immediately suggest ways to avoid this story by trying to develop the system in a way that violates these requirements and avoids this problem in the first place. 

But suppose that despite our best effort, we end up with a deceptively aligned system on our hands. Now what do we do? At this point, the problem of detecting and fixing deception becomes quite similar to just detecting and fixing problems with the model in general – except for one thing.
Deceptive alignment failures are triggered by inputs that are, by definition, hard to find during training. **But other types of problems that don’t fit all of the requirements that** [**Hubinger et al. (2019)**](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) **list can also be hard to find during development.** And for this reason, when we take off our “develop the model” hat and put on our “diagnose and debug the model” hat, the definition from [Hubinger et al. (2019)](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) becomes less important. 

So from the point of view of an engineer wearing their “diagnose and debug the model” hat, **deceptive alignment and other insidious inner alignment failures are issues where the model will betray us as the result of (1) a trigger that is hard to find during development and (2) probably a substantial amount of inner neural circuitry.**

Deception ≈ Trojans
===================

Notice that the definition above applies to trojans (a.k.a. backdoors). Trojans are weaknesses that can be implanted into a model (e.g. by an adversary) which cause it to exhibit an unexpected behavior as the result of some trigger – often a very subtle feature. 

Because of the close connections between trojans and deception, the AI safety research community should continue to pay close attention to this work. There is already a substantial amount of work on this involving methods for detecting, reconstructing, and removing trojans. There may be great opportunities to build off of recent research such as [Wu et al. (2022)](https://arxiv.org/abs/2206.12654) and [Casper et al. (2023)](https://arxiv.org/abs/2302.10894). However, it is also important to be aware that trojans are usually implanted by poisoning a dataset – usually by just inserting the desired trojan trigger into images. But this is not how deception would naturally arise. [Wenger et al. (2022)](https://arxiv.org/abs/2206.10673) found that more subtle ways of implanting trojans into networks caused a number of trojan detection methods to fail. By merely relabeling examples that had certain natural occurrences of trigger objects, they were able to implant weaknesses much more subtly than normal data poisoning. However, in [Casper et al. (2023)](https://arxiv.org/abs/2302.10894), we find that these types of attacks (which we call “natural feature” trojans) can still sometimes be detected by feature synthesis tools.

EDIT: In the comments, Charlie Steiner pointed out another key difference between trojan detection and deceptive alignment. Most work studying trojans involves the model quickly misbehaving upon seeing the trigger. But a treacherous turn may be a gradual process that is not immediately evident (e.g. power seeking). This seems challenging to detect and different from the type of problem addressed by existing trojan work. It may be valuable in the future to study trojans which cause models to misbehave in subtle ways like this. 

Mechanistic interpretability is not uniquely capable of fixing insidious misalignment.
======================================================================================

It seems doubtful that mechanistic interpretability will be necessary or sufficient for fixing insidious misalignment. In the past few months, there seems to have been a subtle shift among AI safety researchers away from human-in-the-loop mechanistic interpretability and toward other things with model internals.
This seems likely to continue, and I think it will get us much further if we don’t rely on humans in the loop.  The one nice thing about deceptive and other insidiously inner-misaligned systems is that even though the inputs that trigger their bad behavior will be hard to find, there will likely be a good amount of inner neural circuitry dedicated to them making the treacherous turn. **While it will be almost impossible to factor RSA-2048 during a training simulation to see what a potentially insidiously misaligned model does, it will probably be much easier to give it synthetic hallucinations!**  Later, EIS XI will discuss how latent adversarial training [(Jermyn, 2022)](https://www.alignmentforum.org/posts/atBQ3NHyqnBadrsGP/latent-adversarial-training) (a form of relaxed adversarial training [(Hubinger, 2019)](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment)) seems to be a compelling strategy for fixing insidious misalignment and something we should start working on in large language models yesterday!  Questions ========= * Would you argue that mechanistic interpretability is either necessary or sufficient for fixing insidious misalignment? * Do you think interpretability or non-interpretability methods are more promising for fixing insidious misalignment?
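As a coda to the trojan analogy above, here is a toy, purely illustrative sketch (not from the post or any cited paper) of the trigger-conditioned structure shared by the RSA-2048 example and implanted trojans. `is_trigger_present`, `act_aligned`, and `act_misaligned` are hypothetical placeholders, not real APIs.

```python
def trigger_conditioned_policy(observation, is_trigger_present,
                               act_aligned, act_misaligned):
    """Behaves indistinguishably from an aligned policy until a rare trigger.

    The trigger (e.g. a factorization of RSA-2048) is astronomically unlikely
    to appear in any input the developers sample during training or testing.
    """
    if is_trigger_present(observation):
        return act_misaligned(observation)  # the treacherous turn
    return act_aligned(observation)         # looks perfectly aligned
```

Any evaluation distribution that never produces the trigger assigns this policy exactly the same behavior as a genuinely aligned one, which is why behavioral testing alone cannot rule out this failure mode, and why internals-based tools are of interest.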
7a548cbf-8ed5-4847-97b2-a6367e1856f7
trentmkelly/LessWrong-43k
LessWrong
Bayesian Reasoning Applied to House Selling: Listing Price Like Yvain's parents, I am planning on moving house. Selling a house and buying a house involve making a lot of decisions based on limited information, which I thought would make a set of good exercises for the application of Bayesian reasoning. I need to decide what price to list my house for, determine how much time and money to put into fixing it up, choose a new home and then there's the two poker games of the final negotiations of the sale. (I logged onto Less Wrong having just made the decision to consider posting this article, so I was kind of weirded out at first by the title of Yvain's post; but then I was relieved that the topic was somewhat different. I am used to coincidences but on the other hand they push me a little paranoid on my spectrum and I'll feel less stable for a few hours. I already know Google tracks me and who knows what algorithms could be running given a bunch of computer scientists...?)   House Story tldr; We're listing at the appraised value +10%. A few years ago, we purchased a beautiful house. 'We' is my husband and I and my parents. We purchased the house because it includes a guest house where my parents can retire. However, my mom continues to postpone retirement and in the meantime my husband and I decided we would a) like more light, b) like a shorter commute and c) could purchase two homes we prefer for the price of this one -- my parents would enjoy a house on the water. (Great post and spot on about the features that matter, Yvain!) I would be happy to sell the house for +5%, covering real estate fees and new flooring we put in. However, three houses in the cul de sac have sold this year for +10% and so we listed it at that price too. Our house is bigger than theirs but not as nice (they have granite and impressive entrances and we don't). On the other hand, having the guest house makes us special. Via agent and potential buyer feedback, we're coming to realize that we might be lucky to sell the house for +5%. At this p
98a2c387-77bf-4628-aac8-9e402d8bfd2f
trentmkelly/LessWrong-43k
LessWrong
In Defense of Finance I'm sharing the link instead of a full cross-post because this essay has: * 5,300 words * 2 footnotes, with links that actually work * 1 GIF, 2 charts, 5 expanding brains * 30+ lively comments This is a "much more than you wanted to know" type post on the financial industry: * Why everyone hates it, except for people in positions of power. * The value of finance in coordinating trade across space and time, told via the parable of Banksy and the corn cobs. * Are mortgage-backed securities a scam or a brilliant innovation? * The stupid arguments about bailouts, and the smarter arguments for/against bailouts. * Economists writing books about bank equity ratios without googling what those equity ratios actually are. * What is up with equity ratios? Is making banks hold 50% capital the solution to financial crises? * What banking regulations actually make banks do. * The inextricable romance between finance and government. * A final request for incrementalism and humility.
2b08ca37-eada-4e80-8b14-10fbf4935548
trentmkelly/LessWrong-43k
LessWrong
Baldness

I have been reviewing FUE hair transplants, and I would like LWers' opinion. I'm actually surprised this isn't covered, as it seems relevant to many users. As far as I can tell, the downsides are:

- Mild scarring on the back of the head
- Doesn't prevent continued hair loss, so if you get e.g. a bald spot filled in, then you will in a few years have a spot of hair in an oasis
- Cost
- Mild pain/hassle in the initial weeks
- Possibility of finding a dodgy surgeon

The scarring is basically covered if you have a couple of days' hair growth there, and I am fine with that as a long-term solution. The continued hair loss is potentially dealt with by a repeated transplant, and more certainly dealt with by getting the initial transplant "all over", i.e. thickening hair, rather than just moving the hairline forward. But it is the area I am most uncertain about. I should add that I am 29 with male pattern baldness on both sides of my family, Norwood level 4, and have seen my hair loss stabilise (I have been taking Propecia for the last year).

Ignoring the cost, my questions are:

- Is anyone aware of any other problems besides these?
- Do you think this solution works?
- Any ideas on how to pick the right surgeon (using someone in Singapore most probably)?
f2799c23-2fc4-44b6-9e2a-75b22db1fb2a
trentmkelly/LessWrong-43k
LessWrong
Arthropod (non) sentience

Matthew Adelstein has recently published a post on arthropod (specifically shrimp) sentience. He defends a comparable degree of pain between shrimps and humans (shrimp = 20% human). My position is that arthropod consciousness is "too small to measure", so there is not a conscious self on which pain can be inflicted, and there is no point in any intervention for their welfare, no matter how cheap. I have argued before in that direction, so I will freely use my previous texts without further (self) citation.

The "pretty hard" problem of consciousness

In "Freedom under naturalistic dualism" (forthcoming in Journal of Neurophilosophy) I have argued that consciousness is radically noumenal, that is, it is the most real thing in the Universe, but also totally impossible to be observed by others. Under physicalist epiphenomenalism the mind is super-impressed on reality, perfectly synchronized, and parallel to it. Physicalist epiphenomenalism is the only philosophy that is compatible with the autonomy of matter and my experience of consciousness, so it has no competitors as a cosmovision. Understanding why some physical systems make an emergent consciousness appear (the so-called "hard problem of consciousness") or finding a procedure that quantifies the intensity of consciousness emerging from a physical system (the so-called "pretty hard" problem of consciousness) is not directly possible: the most Science can do is to build a Laplace demon that replicates and predicts reality. But even the Laplacian demon (the most phenomenally knowledgeable possible being) is impotent to assess consciousness. In fact, regarding Artificial Intelligence we are in the position of Laplace's demon: we have the perfectly predictive source code, but we don't know how to use this (complete) scientific knowledge of the system for consciousness assessment. Matthew suggests in his post that there is strong "scientific evidence" of fish consciousness, but of course there is no scientific evi
602c1904-5a71-460c-84d1-2b0520e69d40
trentmkelly/LessWrong-43k
LessWrong
Meetup : Ottawa - How to Run a Successful Less Wrong Meetup Group

Discussion article for the meetup : Ottawa - How to Run a Successful Less Wrong Meetup Group

WHEN: 30 April 2014 07:30:00PM (-0400)

WHERE: Royal Oak on the Canal, 221 Echo Dr, Ottawa, ON K1S 1N1, Canada

We'll go through and discuss the How to Run a Successful Less Wrong Meetup Group document, try some fun activities, and go meta: a meetup about meetups. There'll be a "LW" sign on the table.
0deb9c98-60e9-4516-9aeb-3fce898aac90
trentmkelly/LessWrong-43k
LessWrong
Even when contrarians win, they lose: Jeff Hawkins

Related: Even When Contrarians Win, They Lose

I had long thought that Jeff Hawkins (and the Redwood Center, and Numenta) were pursuing an idea that didn't work, and were continuing to fail to give up for a prolonged period of time. I formed this belief because I had not heard of any impressive results or endorsements of their research. However, I recently read an interview with Andrew Ng, a leading machine learning researcher, in which he credits Jeff Hawkins with publicizing the "one learning algorithm" hypothesis - the idea that most of the cognitive work of the brain is done by one algorithm. Ng says that, as a young researcher, this pushed him into areas that could lead to general AI. He still believes that AGI is far off, though.

I found out about Hawkins' influence on Ng after reading an old SL4 post by Eliezer and looking for further information about Jeff Hawkins. It seems that the "one learning algorithm" hypothesis was widely known in neuroscience, but not within AI until Hawkins' work. Based on Eliezer's citation of Mountcastle and his known familiarity with cognitive science, it seems that he learned of this hypothesis independently of Hawkins.

The "one learning algorithm" hypothesis is important in the context of intelligence explosion forecasting, since hard takeoff is vastly more likely if it is true. I have been told that further evidence for this hypothesis has been found recently, but I don't know the details.

This all fits well with Robin Hanson's model. Hawkins had good evidence that better machine learning should be possible, but the particular approaches that he took didn't perform as well as less biologically-inspired ones, so he's not really recognized today. Deep learning would definitely have happened without him; there were already many people working in the field, and they started to attract attention because of improved performance due to a few tricks and better hardware. At least Ng's career, though, can be credited to Hawkins. I've been
87021101-e1dd-493f-a452-8d01a041f014
StampyAI/alignment-research-dataset/arxiv
Arxiv
Safely Probabilistically Complete Real-Time Planning and Exploration in Unknown Environments

I Introduction
---------------

Motion planning is a foundational problem in mobile robotics, and the community has devoted significant effort to building theoretical and practical tools for a wide variety of applications. Traditionally, the output of a motion planner is a desired plan or *trajectory* for a dynamical system model. This trajectory is then *tracked* by one or more layers of low-level controllers. Since the real, physical vehicle may follow higher-order, more complex dynamics than those used during planning, the trajectory it actually follows will not coincide with that which was planned. This presents a problem for planners which aim to provide collision-avoidance guarantees.

Figure 1: Depiction of our framework in operation, using a Dubins car model with a fixed minimum turning radius and constant speed. *Left:* Schematic diagram of an environment in which a non-recursively feasible planning algorithm could enter a narrow dead end and fail to recover. *Right:* Snapshots of our framework over time. We build a search graph in known free space, identifying robustly viable trajectories that can safely return to the initial state or directly reach the goal. The physical system iteratively explores the environment along these recursively feasible plans and is eventually guaranteed to identify a viable trajectory to the goal, if one exists (bottom right).

Recently, the FaSTrack framework [1] provides a mechanism for quantifying the maximum tracking error between a high-order dynamical model of the physical system and a lower-order model used for planning. This analysis can be done offline, using a reachability computation, and supplied to a real-time motion planner for online collision-checking. Other, similarly motivated work, e.g. [2], also seeks to quantify this maximum tracking error. Still, a key challenge remains: in *a priori* unknown environments where obstacles are sensed online, it can be difficult to guarantee *recursive feasibility*. Informally, a planning algorithm is recursively feasible if it explores the environment safely and without losing its ability to reach the goal. The dangers of unsafe exploration are illustrated in Fig. 1 (left), in which a non-recursively feasible planner enters a dead end which the system cannot exit.

Most motion planners bypass these issues, for example by assuming full prior knowledge of the environment or by assuming that it is safe to stop and possible to do so instantaneously. While such techniques are effective in many scenarios, there are important applications and systems for which safe exploration really does matter, e.g. a fixed-wing aircraft operating with limited visibility. More generally, it is important to consider recursive feasibility for systems such as unicycles, bicycles, and cars that have inertia and function at relatively high speeds. These issues are especially pronounced for non-holonomic systems.

We propose building a graph of forward-reachable states (for a given dynamical system planning model) within known free space, while simultaneously identifying those states from which the initial state is reachable. This graph implicitly represents a discrete under-approximation of the backward-reachable set of the initial state. We guarantee the safety of the physical system, modeled with higher-order dynamics, using a robust control scheme [1]. Our framework, illustrated in Fig. 1, ensures:

* *Safety:* all trajectories initiated by the physical system will be robustly collision-free.
* *Liveness:* if the goal is safely reachable from the initial state, it will always be safely reachable.
* *Safe Probabilistic Completeness:* if a goal was originally reachable by a plan that preserves the ability to return home, our framework guarantees that it will eventually be found with probability 1.

II Related Work
----------------

Though we defer a formal definition until Section III-B, ultimately, a motion planner is recursively feasible if it can explore unknown space while always remaining safe. There is an extensive body of literature in motion planning and safe exploration, which we cannot hope to fully summarize here. Rather, we identify two main categories of related work and discuss several of the most closely related approaches.

### II-A Safe motion planning

Recent methods such as [1, 3, 2, 4] provide a variety of mechanisms for robust motion planning. Here, robustness is characterized in terms of an envelope around planned trajectories which the physical system is guaranteed to remain within. Our work relies upon this idea, building upon [1]. However, robust planning does not automatically guarantee recursive feasibility. Richards and How [5] and Rosolia and Borrelli [6] directly address this problem within a model predictive control framework. The major differences between these works and our own are that [5] assumes linear time-invariant system dynamics, while [6] addresses an iterative task. Moreover, both assume *a priori* knowledge of all obstacles. Schouwenaars et al. [7] also plan in a receding horizon, but as in our work, recursive safety (though not liveness) is guaranteed by ensuring that all planned trajectories terminate in a safe loiter pattern. Our work may also be viewed as an extension of graph-based kinodynamic planners, e.g. the probabilistic roadmap [8], by enforcing that all edges in the graph are part of safely executable trajectories. Importantly, our framework guarantees recursive feasibility in an *a priori* unknown environment, with potentially high-order system dynamics, and in the presence of environmental disturbances.

### II-B Safe exploration

There is a rich body of work in robotic exploration methods, which tackle the problem of finding viable trajectories to a specified goal in an initially unknown environment. The majority of proposed methods, such as frontier-exploration [9, 10, 11] and D\* [12, 13], have traditionally operated in configuration space, assuming a *kinematic* model of the robot's motion.
Our method, in contrast, focuses on robotic systems for which a *dynamic* model is necessary, such as autonomous cars and aircraft. Bekris and Kavraki [14] present a sampling-based strategy which reasons about inevitable collision sets [15], but is restricted to work with drift-less dynamics. More recent work by Janson et al. [16] also addresses the dynamic exploration problem, but assumes that the vehicle is able to come to a stop in finite time. Exploration has also been studied within the context of Markov Decision Processes (MDPs) and Reinforcement Learning (RL). Moldovan and Abbeel [17] propose an approach for generating a sequence of actions which preserve ergodicity with high probability. Other similar approaches, e.g. [18], also design risk-aware control policies that satisfy approximate constraints. Berkenkamp et al. [19] and Chow et al. [20] define safety in terms of Lyapunov stability. Though generally desirable, stability is insufficient to guarantee collision avoidance; in this work, we use a stronger set-based definition of safety. Our formulation of safe exploration is closely related to that of [21, 22, 15], which characterize safety with reachable sets.

III Problem Formulation
------------------------

### III-A Preliminaries

We consider an autonomous navigation task in a bounded *a priori* unknown static environment $\mathcal{X} \subset \mathbb{R}^{n_x}$. The autonomous system has dynamic state $s \in \mathcal{S} \subset \mathbb{R}^{n_s}$, which includes, but is in general not limited to, its location $x$ in the environment $\mathcal{X}$. We presume that for each point $x \in \mathcal{X}$, the environment representation can assign a label $\{\text{OCCUPIED}, \text{FREE}, \text{UNKNOWN}\}$. The system's knowledge of the environment will be updated online according to measurements from a well-characterized sensor, with field of view $\mathcal{F} : \mathcal{S} \to 2^{\mathcal{X}}$. In this work, we will restrict our attention to deterministic sensing models, i.e. if $x \in \mathcal{X}$ is within the sensor's field of view $\mathcal{F}(s)$, it will be correctly identified as either $\{\text{OCCUPIED}, \text{FREE}\}$. Probabilistic extensions are possible, though beyond the scope of this paper.

We assume known system dynamics, of the form:

$$\dot{s} = f(s, u, d),$$

where $u \in \mathcal{U} \subset \mathbb{R}^{n_u}$ is the system's control input and $d \in \mathcal{D} \subset \mathbb{R}^{n_d}$ is a bounded disturbance. In general, the dynamical model $f$ of the physical system will be nonlinear and high-order, making it challenging to compute trajectories in real time. Instead, we can use an approximate, lower-order dynamical model for real-time trajectory computation, along with a framework which produces a known tracking controller for the full-order model allowing it to follow the trajectories of the low-order model with a guarantee on accuracy. Let the simplified state of the system for planning purposes be $p \in \mathcal{P} \subset \mathbb{R}^{n_p}$, governed by approximate *planning dynamics*:

$$\dot{p} = g(p, c),$$

with $c \in \mathcal{C} \subset \mathbb{R}^{n_c}$ the control input of the simplified system, which we will refer to as the *planning system*. We use the FaSTrack framework [1] to provide a robust controlled invariant set in the relative state space $\mathcal{R} \subset \mathbb{R}^{n_r}$ between the planning reference and the full system. This relative state depends on the dynamical models used. A concrete example will be presented in Section V, and we direct the reader to [1, 4] for further discussion. The output of this robust analysis is two-fold: the autonomous system is given an optimal tracking control law $k^* : \mathcal{R} \to \mathcal{U}$ that will keep the relative state inside of this invariant set at runtime *regardless* of the low-order trajectory proposed by the planning algorithm. In turn, the planning algorithm can use the projection of the invariant set onto the planning state space $\mathcal{P}$ as a guaranteed tracking error bound $\mathcal{E} \subset \mathcal{P}$ for the purposes of collision-checking at planning time.
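To make the role of the tracking error bound concrete, here is a minimal sketch (ours, not the authors' implementation) of a conservative collision check against a grid map. It assumes a circular robot footprint and over-approximates $\mathcal{E}$ by a ball of radius `r_error` around the planned position; the bound computed by FaSTrack is generally not a ball, and all names here (`known_free`, `r_robot`, `r_error`) are hypothetical.

```python
import numpy as np

def robust_footprint_is_free(p_xy, known_free, resolution, r_robot, r_error):
    """Check that an inflated footprint around planned state p lies in known free space.

    p_xy       : planned (x, y) position in meters
    known_free : 2D boolean grid, True where the map is known FREE
    resolution : meters per grid cell
    r_robot    : physical footprint radius in meters
    r_error    : radius of a ball over-approximating the tracking error bound
    """
    # Inflating the footprint by the error bound is a conservative stand-in
    # for the Minkowski-sum robust footprint defined in Section III-B below.
    r_total = r_robot + r_error
    span = int(np.ceil(r_total / resolution))
    ci = int(round(p_xy[0] / resolution))
    cj = int(round(p_xy[1] / resolution))
    for di in range(-span, span + 1):
        for dj in range(-span, span + 1):
            if (di * di + dj * dj) * resolution ** 2 > r_total ** 2:
                continue  # cell lies outside the inflated disk
            i, j = ci + di, cj + dj
            if not (0 <= i < known_free.shape[0] and 0 <= j < known_free.shape[1]):
                return False  # off the map: treat as UNKNOWN, hence unsafe
            if not known_free[i, j]:
                return False  # an OCCUPIED or UNKNOWN cell in the footprint
    return True
```

Treating off-map and UNKNOWN cells as unsafe is what enforces, in this sketch, the requirement that the system only ever occupies locations known to be FREE.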
A feature of the FaSTrack framework is that the robust safety analysis depends only upon the relative dynamics, and *not* on the particular algorithm used for planning low-order trajectories. We inherit this modularity in our recursively feasible planning framework, which can be used with an arbitrary low-level motion planner. In Section V, we demonstrate our framework with a standard third-party algorithm from the Open Motion Planning Library (OMPL) [23].

### III-B Recursive Feasibility: Safety and Liveness

We now define several important concepts more formally, as they pertain directly to the theoretical safety guarantees of our proposed framework. Let $\xi(\cdot; t_0, p, c(\cdot)) : \mathbb{R} \to \mathcal{P}$ denote the trajectory followed by the planning system starting at state $p$ at time $t_0$ under some control signal $c(\cdot)$ over time. Given a planned state $p$, we refer to its *footprint* $\phi(p)$ as the set of points $x \in \mathcal{X}$ that are occupied by the system in this state. We additionally define the robust footprint $\phi_{\mathcal{E}}(p)$ as the set of points $x \in \mathcal{X}$ that are occupied by some $p' \in \{p + \mathcal{E}\}$ (with $+$ here denoting Minkowski addition). This represents the set of locations that may be occupied by the physical system while attempting to track the planned state $p$. We will require that the system is at all times guaranteed to only occupy locations known to be FREE. For convenience, we will denote by $\mathcal{X}_{\text{FREE}}(t)$ the set of points $x \in \mathcal{X}$ that are labelled as FREE at time $t$. We then have the following definitions.

###### Definition 1 (Safety)

A planned trajectory $\xi(\cdot; t_0, p, c(\cdot))$ is known at time $t_0$ to be safe, i.e. collision-free, if it satisfies the following criterion:

$$\forall t \geq t_0, \quad \phi_{\mathcal{E}}\big(\xi(t; t_0, p, c(\cdot))\big) \subseteq \mathcal{X}_{\text{FREE}}(t_0).$$

Observe that Definition 1 is *not* a statement about stability, as in e.g. [19]. Dynamic stability is in fact neither a necessary nor a sufficient condition for safety understood as guaranteed collision (and failure) avoidance.

###### Definition 2 (Safe Reachable Set)

The safe forward reachable set $\Omega_F$ of a set of states $\mathcal{L} \subseteq \mathcal{P}$ at time $t_0$ is the set of states $p' \in \mathcal{P}$ that are known at $t_0$ to be safely reachable from $\mathcal{L}$ under some control signal $c(\cdot)$:

$$\Omega_F(\mathcal{L}; t_0) := \Big\{ p' \mid \exists p \in \mathcal{L},\ \exists t \geq t_0,\ \exists c(\cdot),\ \forall \tau \in [t_0, t] : \phi_{\mathcal{E}}\big(\xi(\tau; t_0, p, c(\cdot))\big) \subseteq \mathcal{X}_{\text{FREE}}(t_0),\ p' = \xi(t; t_0, p, c(\cdot)) \Big\}.$$

Analogously, the safe backward reachable set $\Omega_B$ of $\mathcal{L}$ at $t_0$ is the set of states $p' \in \mathcal{P}$ from which $\mathcal{L}$ is known at time $t_0$ to be safely reachable under some control signal (this can also be thought of as the set of states $p' \in \mathcal{P}$ that can be safely reached from $\mathcal{L}$ in backward time, hence the name *backward* reachable set):

$$\Omega_B(\mathcal{L}; t_0) := \Big\{ p' \mid \exists p \in \mathcal{L},\ \exists t \geq t_0,\ \exists c(\cdot),\ \forall \tau \in [t_0, t] : \phi_{\mathcal{E}}\big(\xi(\tau; t_0, p', c(\cdot))\big) \subseteq \mathcal{X}_{\text{FREE}}(t_0),\ p = \xi(t; t_0, p', c(\cdot)) \Big\}.$$

We will often consider reachable sets of individual states; for conciseness, we will write $\Omega_B(p; t_0)$ rather than $\Omega_B(\{p\}, t_0)$. We now proceed to define viability in terms of these sets.

###### Definition 3 (Viability)

A state $p$ is *viable* at time $t_0$ with respect to a goal state $p_{\text{goal}}$ and a home state $p_{\text{home}}$ if at $t_0$ it is known to be possible to reach either $p_{\text{goal}}$ or $p_{\text{home}}$ from $p$ while remaining safe, i.e. $p \in \Omega_B(\{p_{\text{goal}}, p_{\text{home}}\}; t_0)$. A trajectory $\xi$ is viable at $t_0$ if all states along $\xi$ are viable at $t_0$. Note that a trajectory can be safe (Def. 1) but not viable.

###### Definition 4 (Safely Explorable Set)

The safely explorable set $\mathcal{P}_{\text{SE}}(p) \subset \mathcal{P}$ of a state $p$ is the collection of states that can eventually be visited by the system through a trajectory starting at state $p$ with no prior knowledge of $\mathcal{X}$ whose states are, at each time $t \geq 0$, viable according to the known free space $\mathcal{X}_{\text{FREE}}(t)$.

Based on the idea of the safely explorable set we can now introduce the important notion of *liveness* for the purposes of our work.

###### Definition 5 (Liveness)

A state $p$ is live with respect to a goal state $p_{\text{goal}}$ if it is possible to reach $p_{\text{goal}}$ from $p$ while remaining in the safely explorable set for all time, i.e. if $p_{\text{goal}} \in \mathcal{P}_{\text{SE}}(p)$. A trajectory $\xi$ is live if all states in $\xi$ are live.

Finally, we will refer to a planning algorithm as *recursively feasible* if, given that the initial state $p_0$ is live, all future states $p$ are both live and viable. We will show that our proposed framework is recursively feasible. Moreover, we will also show that it is safely probabilistically complete, in the sense that, if $p_0$ is live with respect to $p_{\text{goal}}$, then we will eventually reach $p_{\text{goal}}$ through continued guaranteed safe exploration, with probability 1.

IV General Framework
---------------------

### IV-A Overview

Our framework is comprised of two concurrent, asynchronous operations: building a graph of states which discretely under-approximate the forward and backward reachable sets of the initial "home" state, and traversing this graph to find recursively feasible trajectories.
Namely, we define the graph 𝒢F:={V,E}assignsubscript𝒢𝐹𝑉𝐸\mathcal{G}\_{F}:=\{V,E\}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT := { italic\_V , italic\_E } of vertices V𝑉Vitalic\_V and edges E𝐸Eitalic\_E. Vertices are individual states in 𝒫𝒫\mathcal{P}caligraphic\_P, and directed edges are trajectories ξ𝜉\xiitalic\_ξ between pairs of vertices. 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT will be a discrete under-approximation of the current safe forward reachable set of the initial state phomesubscript𝑝homep\_{\textrm{home}}italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT. We also define the graph 𝒢B⊆𝒢Fsubscript𝒢𝐵subscript𝒢𝐹\mathcal{G}\_{B}\subseteq\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ⊆ caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT to contain only those vertices which are in the safe backward reachable set of phomesubscript𝑝homep\_{\textrm{home}}italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT and pgoalsubscript𝑝goalp\_{\textrm{goal}}italic\_p start\_POSTSUBSCRIPT goal end\_POSTSUBSCRIPT, and the corresponding edges. We use the notation p∈𝒢F𝑝subscript𝒢𝐹p\in\mathcal{G}\_{F}italic\_p ∈ caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT to mean that state p𝑝pitalic\_p is a vertex in 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT, and likewise for 𝒢Bsubscript𝒢𝐵\mathcal{G}\_{B}caligraphic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT. We use following two facts extensively. They follow directly from the definitions above and our assumptions on deterministic sensing and a static environment. ###### Remark 1 (Permanence of Safety) A trajectory ξ𝜉\xiitalic\_ξ that is safe at time t0subscript𝑡0t\_{0}italic\_t start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT will continue to be safe for all t≥t0𝑡subscript𝑡0{t\geq t\_{0}}italic\_t ≥ italic\_t start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. ###### Remark 2 (Permanence of Reachability) A state p𝑝pitalic\_p that is in the safe forward or backward reachable set of another state p0subscript𝑝0p\_{0}italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT at time t0subscript𝑡0t\_{0}italic\_t start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT will continue to belong to this set for all t≥t0𝑡subscript𝑡0t\geq t\_{0}italic\_t ≥ italic\_t start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, i.e. ΩF(p0;t0)⊆ΩF(p0;t)subscriptnormal-Ω𝐹subscript𝑝0subscript𝑡0subscriptnormal-Ω𝐹subscript𝑝0𝑡\Omega\_{F}(p\_{0};t\_{0})\subseteq\Omega\_{F}(p\_{0};t)roman\_Ω start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ; italic\_t start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) ⊆ roman\_Ω start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ; italic\_t ) and ΩB(p0;t0)⊆ΩB(p0;t)subscriptnormal-Ω𝐵subscript𝑝0subscript𝑡0subscriptnormal-Ω𝐵subscript𝑝0𝑡\Omega\_{B}(p\_{0};t\_{0})\subseteq\Omega\_{B}(p\_{0};t)roman\_Ω start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ; italic\_t start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) ⊆ roman\_Ω start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ; italic\_t ). ### IV-B Building the graph We incrementally build the graph by alternating between outbound expansion and inbound consolidation steps. 
In the outbound expansion step, new candidate states are sampled, and if possible, connected to 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT. This marks them as part of the forward reachable set of phomesubscript𝑝homep\_{\textrm{home}}italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT. In the inbound consolidation step, we attempt to find a safe trajectory from forward-reachable states in 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT back to a state in 𝒢Bsubscript𝒢𝐵\mathcal{G}\_{B}caligraphic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, which is known to be *viable*. Successful inbound consolidation marks a state as either able to reach pgoalsubscript𝑝goalp\_{\textrm{goal}}italic\_p start\_POSTSUBSCRIPT goal end\_POSTSUBSCRIPT or safely return to phomesubscript𝑝homep\_{\textrm{home}}italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT. #### IV-B1 Outbound expansion This process incrementally expands a discrete under-approximation 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT of the forward reachable set of the home state, ΩF(phome;t)subscriptΩ𝐹subscript𝑝home𝑡\Omega\_{F}(p\_{\textrm{home}};t)roman\_Ω start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT ; italic\_t ). Note that, by Remark [2](#Thmremark2 "Remark 2 ‣ IV-A Overview ‣ IV General Framework ‣ Safely Probabilistically Complete Real-Time Planning and Exploration in Unknown Environments"), ΩF(phome;t)subscriptΩ𝐹subscript𝑝home𝑡\Omega\_{F}(p\_{\textrm{home}};t)roman\_Ω start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT ; italic\_t ) can only grow as the environment 𝒳𝒳\mathcal{X}caligraphic\_X is gradually explored over time and therefore any state p𝑝pitalic\_p added to 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT at a given time t𝑡titalic\_t is guaranteed to belong to ΩF(phome;t′)subscriptΩ𝐹subscript𝑝homesuperscript𝑡′\Omega\_{F}(p\_{\textrm{home}};t^{\prime})roman\_Ω start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT ( italic\_p start\_POSTSUBSCRIPT home end\_POSTSUBSCRIPT ; italic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) for all t′≥tsuperscript𝑡′𝑡t^{\prime}\geq titalic\_t start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ≥ italic\_t. We add states to 𝒢Fsubscript𝒢𝐹\mathcal{G}\_{F}caligraphic\_G start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT via a Monte Carlo sampling strategy inspired by existing graph-based kinodynamic planners [[24](#bib.bibx24)], illustrated in Fig. [1(a)](#S4.F1.sf1 "1(a) ‣ Figure 2 ‣ IV-B1 Outbound expansion ‣ IV-B Building the graph ‣ IV General Framework ‣ Safely Probabilistically Complete Real-Time Planning and Exploration in Unknown Environments"). We present a relatively simple strategy here, although more sophisticated options for sampling new states are possible, e.g. [[25](#bib.bibx25), [26](#bib.bibx26)]. 
Let $p_{\textrm{new}}$ be sampled uniformly at random from $\mathcal{P}$ at time $t$ such that $\phi_{\mathcal{E}}(p_{\textrm{new}}) \subseteq \mathcal{X}_{\textrm{FREE}}(t)$. We wish to establish whether or not $p_{\textrm{new}}$ is in the safe forward reachable set of home at $t$, i.e. $p_{\textrm{new}} \in \Omega_F(p_{\textrm{home}}; t)$. This is accomplished by invoking a third-party motion planner, which will attempt to find a safe trajectory to $p_{\textrm{new}}$ from any of the points already known to be in $\Omega_F(p_{\textrm{home}}; t)$. In Section V, we use a standard kinodynamic planner from the OMPL [23] for this purpose.

Figure 2: In outbound expansion (a), a new state is sampled from $\mathcal{P}$ and added to $\mathcal{G}_F$ if safely reachable from $\mathcal{G}_F$. In inbound consolidation (b), a state in $\mathcal{G}_F$ is added to $\mathcal{G}_B$ if it can safely reach a (viable) state in $\mathcal{G}_B$.

We observe that repeatedly executing this procedure will, in the limit, result in a dense discrete under-approximation of $\Omega_F(p_{\textrm{home}}; t)$.
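A hedged sketch of how one outbound expansion step might be implemented, continuing the data structure above. Here `sample_free_state` and `plan` are hypothetical wrappers: the former rejection-samples $p_{\textrm{new}}$ with $\phi_{\mathcal{E}}(p_{\textrm{new}}) \subseteq \mathcal{X}_{\textrm{FREE}}(t)$, and the latter stands in for a third-party kinodynamic planner (e.g. one from OMPL), returning a safe trajectory or `None`.

```python
import math

def outbound_expansion(graph, sample_free_state, plan, k=10):
    """One outbound expansion step (Fig. 2(a)); a sketch, not the paper's code."""
    p_new = sample_free_state()
    # Try connecting from the k nearest vertices already certified to be in
    # the safe forward reachable set of p_home (cf. the remarks in Sec. IV-E).
    nearest = sorted(graph.vertices, key=lambda v: math.dist(v.state, p_new))[:k]
    for v in nearest:
        xi = plan(v.state, p_new)
        if xi is not None:
            w = Vertex(state=p_new)
            e = Edge(src=v, dst=w, trajectory=xi)
            graph.vertices.append(w)
            graph.edges.append(e)
            v.children.append(e)
            w.parents.append(e)
            return w   # p_new is now known to lie in Omega_F(p_home; t)
    return None        # no safe connection found; discard the sample for now
```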
Formally, assuming that the low-level planner will find a valid trajectory to a sampled state $p$ if one exists, then for any $\epsilon > 0$ the probability that a new sampled state $p' \in \Omega_F(p_{\textrm{home}}, t)$ will lie within a distance of $\epsilon$ from the nearest state $p \in \mathcal{G}_F$ goes to $1$ in the limit of infinite samples. We formalize this observation below:

###### Lemma 1 (Dense Sampling)

For all $\epsilon > 0$, assuming we sample candidate states $p$ uniformly and independently from $\mathcal{P}$ and $\mathcal{P}$ is compact, then letting $p_k$ be the $k$-th sampled state from $\mathcal{P}$ we have that, $\forall t$:

$$\lim_{k \to \infty} P\Big(\min_{p \in \mathcal{G}_F} \|p_k - p\| < \epsilon \;\Big|\; p_k \in \Omega_F(p_{\textrm{home}}; t)\Big) = 1.$$

###### Proof:

This follows directly from the properties of uniform sampling from compact sets. ∎

This will be useful in proving the safe probabilistic completeness of our framework.

#### IV-B2 Inbound consolidation

This process incrementally adds states in $\mathcal{G}_F$ to a discrete approximation $\mathcal{G}_B$ of the safe *backward* reachable set of $\{p_{\textrm{home}}, p_{\textrm{goal}}\}$. By Definition 3, any state added to this set is *viable*, which means that a trajectory will always exist from it to either $p_{\textrm{goal}}$ or $p_{\textrm{home}}$. This is a crucial element of our overall guarantee of recursive feasibility.
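The consolidation procedure is spelled out in the next paragraph; anticipating it, a minimal sketch (continuing the hypothetical data structure and `plan` wrapper above) might look like this:

```python
def inbound_consolidation(graph, plan):
    """One inbound consolidation step (Fig. 2(b)); a sketch, not the paper's code."""
    anchors = [v for v in graph.vertices if v.in_backward_set]  # current G_B
    for v in graph.vertices:
        if v.in_backward_set:
            continue
        for b in anchors:
            xi = plan(v.state, b.state)
            if xi is None:
                continue
            graph.edges.append(Edge(src=v, dst=b, trajectory=xi))
            # v and all of its ancestors in G_F join G_B: each ancestor can now
            # route through v to reach p_home or p_goal.
            stack = [v]
            while stack:
                u = stack.pop()
                if not u.in_backward_set:
                    u.in_backward_set = True
                    stack.extend(e.src for e in u.parents)
            break
```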
We recall that $\mathcal{G}_B \subseteq \mathcal{G}_F$. Suppose that $p \in \mathcal{G}_F \setminus \mathcal{G}_B$. We will attempt to add $p$ to $\mathcal{G}_B$ by finding a safe trajectory from $p$ to any of the states currently in $\mathcal{G}_B$, again by invoking the low-level motion planner. If we succeed in finding such a trajectory, then by construction there exists a trajectory all the way to $p_{\textrm{home}}$, so we add $p$ to $\mathcal{G}_B$. If $p$ is added to $\mathcal{G}_B$, we also add all of its ancestors in $\mathcal{G}_F$ to $\mathcal{G}_B$, since there now exists a trajectory from each ancestor through $p$ to either $p_{\textrm{home}}$ or $p_{\textrm{goal}}$. This procedure is illustrated in Fig. 2(b).

### IV-C Exploring the graph

When requested, we must be able to supply a safe trajectory beginning at the current state reference $p(t)$ tracked by the system. Recall from Section III-A that under the robust tracking framework [1], the physical system’s state $s(t)$ is guaranteed to remain within an error bound $\mathcal{E}$ of $p(t)$, measured on the planning state space $\mathcal{P}$. This property allows us to make guarantees in terms of planning model states $p$ rather than full physical system states $s$. Trajectories $\xi$ output by our framework must guarantee future safety for all time; that is, as the system follows $\xi$ we must always be able to find a safe trajectory starting from any future state.
In addition, we require that $p_{\textrm{home}}$ remains safely reachable throughout the trajectory; this ensures that *liveness* is preserved (if it was possible from $p_{\textrm{home}}$ to safely explore $\mathcal{X}$ and reach $p_{\textrm{goal}}$, then this possibility will not be lost by embarking on $\xi$). Note that liveness is an important property separate from safety: a merely safe planner may eventually trap the system in a periodic safe orbit that it cannot safely exit.

By construction, any cycle in $\mathcal{G}_B$ is safe for all future times (Remark 1). This readily suggests that we could guarantee perpetual recursive feasibility by always returning the same cycle. However, this naive strategy would never reach the goal; moreover, it would not incrementally explore the environment. In order to force the system to explore unknown regions of $\mathcal{X}$, we modify this naive strategy by routing the system through a randomly selected *unvisited* state $p_{\textrm{new}} \in \mathcal{G}_B$, and then back to $p_{\textrm{home}}$. The trajectory always ends in a periodic safe orbit between $p_{\textrm{new}}$ and $p_{\textrm{home}}$. Note that this random selection does not need to be done naively (e.g. by uniform sampling of unvisited states in $\mathcal{G}_B$), and efficient exploration strategies are certainly possible. In our examples we use an $\epsilon$-greedy sampling heuristic by which, with probability $1 - \epsilon$, we select the unvisited $p \in \mathcal{G}_B$ closest to $p_{\textrm{goal}}$, and otherwise, with probability $\epsilon$, we uniformly sample an unvisited state in $\mathcal{G}_B$. Of course, if $p_{\textrm{goal}}$ is ever added to $\mathcal{G}_B$, we may simply return a trajectory from the current state $p(t)$ to $p_{\textrm{goal}}$.
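For illustration, the $\epsilon$-greedy selection just described might be sketched as follows; the function name, interface, and default $\epsilon$ are our own assumptions.

```python
import math
import random

def select_exploration_target(graph, p_goal, eps=0.1):
    """epsilon-greedy choice of the next unvisited state in G_B to route through."""
    unvisited = [v for v in graph.vertices
                 if v.in_backward_set and not v.visited]
    if not unvisited:
        return None
    if random.random() < eps:
        return random.choice(unvisited)  # explore: uniform over unvisited states
    # exploit: pick the unvisited state closest to the goal
    return min(unvisited, key=lambda v: math.dist(v.state, p_goal))
```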
Returning such a trajectory will always be possible because, by construction, every state in $\mathcal{G}_B$ is safely reachable from every other state in $\mathcal{G}_B$ (if necessary, by looping through $p_{\textrm{home}}$).

### IV-D Algorithm summary

To summarize, our framework maintains graph representations of the forward reachable set of $p_{\textrm{home}}$ and the backward reachable set of $\{p_{\textrm{home}}, p_{\textrm{goal}}\}$. Over time, these graphs become increasingly dense (Lemma 1). Additionally, all output trajectories terminate at $p_{\textrm{goal}}$ or in a cycle that includes $p_{\textrm{home}}$. This implies our main theoretical result:

###### Theorem 1 (Recursive Feasibility)

Assuming that we are able to generate an initial viable trajectory (e.g. a loop through $p_{\textrm{home}}$), all subsequently generated trajectories will be viable and preserve the liveness of $p_{\textrm{home}}$. Thus, our framework guarantees *recursive feasibility*.

###### Proof:

By assumption, the initial trajectory $\xi_0$ output at $t_0$ is safe (Definition 1). We now proceed by induction: assume that the $i$-th reference trajectory $\xi_i$ is viable for the knowledge of free space at the time $t_i$ at which it was generated, i.e. $\forall t \geq t_i, \; \xi_i(t) \in \Omega_B(\{p_{\textrm{home}}, p_{\textrm{goal}}\}; t_i)$.
Assuming $p_{\textrm{goal}}$ has not been reached yet at the time of the next planning request, $t_{i+1}$, a new trajectory will be generated from the initial state $\xi_i(t_{i+1})$. The new trajectory $\xi_{i+1}$ will be created by concatenating safe trajectories between states in $\mathcal{G}_B \subseteq \Omega_B(\{p_{\textrm{home}}, p_{\textrm{goal}}\}; t_i)$ and therefore will be a viable trajectory. Such a trajectory can always be found, because it is always possible to choose $\xi_{i+1} \equiv \xi_i$, which, by the inductive hypothesis, was a viable trajectory at time $t_i$ and, by Remark 2, continues to be viable at $t_{i+1}$. Therefore all planned trajectories $\xi_i$ will retain the ability to either safely reach $p_{\textrm{goal}}$ or safely return to $p_{\textrm{home}}$.
In the former case, $\xi_i$ is immediately live (and since $\forall t \geq 0, \; \xi_i(t) \in \Omega_F(p_{\textrm{home}}; t_i)$, $p_{\textrm{home}}$ must have been live too); in the latter, $\xi_i$ will inherit the liveness of $p_{\textrm{home}}$, by observing that $\forall t \geq 0, \; \xi_i(t) \in \Omega_B(p_{\textrm{home}}; t_i)$. ∎

###### Corollary 1 (Dynamical System Exploration)

Given that the safety of trajectories is evaluated using the robust footprint $\phi_{\mathcal{E}}(\cdot)$, and the relative state between the dynamical system and the planning reference is guaranteed to be contained in $\mathcal{E}$, Theorem 1 implies that the dynamical system can continually execute safe trajectories in the environment.

Moreover, we ensure that each output trajectory visits an unexplored state in $\mathcal{G}_B$, which implies that $\mathcal{G}_B$ approaches the safely explorable set $\mathcal{P}_{\textrm{SE}}(p_{\textrm{home}})$ from Definition 4. Together with Theorem 1, this implies the following completeness result:

###### Theorem 2 (Safe Probabilistic Completeness)

In the limit of infinite runtime, our framework eventually finds the goal with probability $1$ if it is within the safely explorable set.
###### Proof:

By Theorem 1, all output trajectories will be viable; hence, the autonomous system will remain safe for all time (Corollary 1). Further, since each generated trajectory visits a previously unvisited state in $\mathcal{G}_B$ with nonzero probability, by Lemma 1 it will eventually observe new regions in the safely explorable set $\mathcal{P}_{\textrm{SE}}(p_{\textrm{home}})$, if any exist. Moreover, those regions will eventually be sampled, added to $\mathcal{G}_B$, and visited by subsequent trajectories. Because we have assumed all sets of interest to be bounded, this implies that we will eventually add $p_{\textrm{goal}}$ to $\mathcal{G}_B$ as long as $p_{\textrm{goal}} \in \mathcal{P}_{\textrm{SE}}(p_{\textrm{home}})$. ∎

### IV-E Remarks

We conclude this section with several brief remarks regarding implementation. In Sec. IV-B, we specify that states should be connected to existing states in $\mathcal{G}_F$ and $\mathcal{G}_B$. In practice, we find that connecting to one of the $k$-nearest neighbors (measured in the Euclidean norm over $\mathcal{P}$) in the appropriate graph suffices. In Sec. IV-C, we describe traversing $\mathcal{G}_B$ to find safe trajectories between vertices. For efficiency, we recommend maintaining the following at each vertex: cost-from-home, cost-to-home, and cost-to-goal, where cost may be any consistent metric on trajectories (e.g. duration). If these quantities are maintained, then care must be taken to update them appropriately for descendants and ancestors of states that are added to $\mathcal{G}_F$ and $\mathcal{G}_B$ in Sec. IV-B.
Finally, we observe that outbound expansion, inbound consolidation, and graph exploration may all be performed in parallel and asynchronously.

V Example
---------

We demonstrate our framework in a real-time simulation, implemented within the Robot Operating System (ROS) software environment [27].

### V-A Setup

Let the high-order system dynamics be given by the following 6D model:

$$\dot{s} = \begin{bmatrix} \dot{x} \\ \dot{v}_x \\ \dot{y} \\ \dot{v}_y \\ \dot{z} \\ \dot{v}_z \end{bmatrix} = \begin{bmatrix} v_x \\ g \cos u_1 \\ v_y \\ -g \sin u_2 \\ v_z \\ u_3 - g \end{bmatrix} \tag{13}$$

where $g$ is the acceleration due to gravity, the states are position and velocity in $(x, y, z)$, and the controls are $u_1 = \textrm{pitch}$, $u_2 = \textrm{roll}$, and $u_3 = \textrm{thrust acceleration}$. These dynamics are a reasonably accurate model for a lightweight quadrotor operating near a hover and at zero yaw.
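Transcribed directly from (13), the model is straightforward to evaluate numerically. The following is our own transcription, not code from the paper:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2 (assumed value)

def quad_6d_dynamics(s, u):
    """Right-hand side of the 6D near-hover model (13).
    s = [x, vx, y, vy, z, vz]; u = [pitch, roll, thrust acceleration]."""
    x, vx, y, vy, z, vz = s
    u1, u2, u3 = u
    return np.array([vx,
                     G * np.cos(u1),
                     vy,
                     -G * np.sin(u2),
                     vz,
                     u3 - G])
```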
We consider the following lower-order 3D dynamical model for planning:

$$\dot{p} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v \cos\theta \\ v \sin\theta \\ c \end{bmatrix} \tag{20}$$

where $v$ is a constant tangential speed in the Frenet frame, the states are the absolute heading $\theta$ and the $(x, y)$ position in the fixed frame, and the control $c$ is the turning rate. We interpret these dynamics as a Dubins car operating at a fixed height $z_{\textrm{p}}$. We take controls to be bounded in all dimensions independently by known constants: $u \in [\underline{u}_1, \overline{u}_1] \times [\underline{u}_2, \overline{u}_2] \times [\underline{u}_3, \overline{u}_3]$ and $c \in [\underline{c}, \overline{c}]$.

In order to compute the FaSTrack tracking error bound $\mathcal{E}$, we must solve a Hamilton-Jacobi (HJ) reachability problem for the *relative dynamics* defined by (13) and (20).
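Similarly, a minimal transcription of the planning model (20); the value chosen for the fixed speed $v$ is an assumption for illustration:

```python
import numpy as np

V_PLAN = 1.0  # constant tangential speed v of the planning model (assumed value)

def dubins_dynamics(p, c):
    """Right-hand side of the 3D Dubins planning model (20).
    p = [x, y, theta]; c = turning rate."""
    x, y, theta = p
    return np.array([V_PLAN * np.cos(theta),
                     V_PLAN * np.sin(theta),
                     c])
```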
In this case, the relative dynamics are given by:

$$\dot{r} = \begin{bmatrix} \dot{d} \\ \dot{\psi} \\ \dot{v}_T \\ \dot{v}_N \end{bmatrix} = \begin{bmatrix} v_T \cos\psi + v_N \sin\psi \\ -c - v_T \sin\psi + v_N \cos\psi \\ u_1 \cos\theta - u_2 \sin\theta + c v_T \\ -u_1 \sin\theta - u_2 \cos\theta - c v_T \end{bmatrix} \tag{29}$$

with the relative states $d$ (distance), $\psi$ (bearing), $v_T$ (tangential velocity), and $v_N$ (normal velocity) illustrated in Fig. 3(a).

Figure 3: (a) Relative states for the 6D near-hover quadrotor tracking the 3D Dubins car. (b) Minimum value over $v_T$ and $v_N$, for each relative $(x, y)$ position in the planner’s Frenet frame. Any non-empty sublevel set can be used as a tracking error bound $\mathcal{E}$.

Fig. 3(b) is a 3D rendering of the FaSTrack value function [1] computed using level set methods [28].
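For completeness, a direct transcription of (29); the interface is our own rendering, not the authors’ code, with $c$ and $\theta$ the planner’s turning rate and heading:

```python
import numpy as np

def relative_dynamics(r, u, c, theta):
    """Right-hand side of the relative dynamics (29).
    r = [d, psi, vT, vN]; u = [u1, u2, u3]."""
    d, psi, vT, vN = r
    u1, u2, _ = u
    return np.array([
        vT * np.cos(psi) + vN * np.sin(psi),
        -c - vT * np.sin(psi) + vN * np.cos(psi),
        u1 * np.cos(theta) - u2 * np.sin(theta) + c * vT,
        -u1 * np.sin(theta) - u2 * np.cos(theta) - c * vT,
    ])
```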
The value function records the maximum relative distance $d$ between the high- and low-order dynamical models. In order to guarantee that, at run time, the distance between the two systems does not exceed this value, the value function is computed by solving a differential game in which $c(\cdot)$ seeks to maximize the relative distance and $u(\cdot)$ seeks to minimize it. Observe in Fig. 3(b) that level sets of the value function with sufficiently high value are well approximated by discs centered on the origin in $(x, y)$. Thus, we approximate the TEB $\mathcal{E}$ by such a disc for rapid collision checking during each call to the low-level motion planner. Since the high-order dynamics (13) do allow for variation in $z$, we also incorporate a $z$ extent for $\mathcal{E}$, which may be obtained by solving a similar differential game in the $(z, v_z)$ subsystem of (13), as in [4].

We use the KPIECE1 kinodynamic planner [29] within the Open Motion Planning Library (OMPL) [23] to plan all trajectories for the low-level dynamics while building the graphs $\mathcal{G}_F$ and $\mathcal{G}_B$. For simplicity, we model static obstacles as spheres in $\mathbb{R}^3$ and use an omnidirectional sensing model in which all obstacles within a fixed range of the vehicle are sensed exactly. We emphasize that these choices of environment and sensing models are deliberately simplified in order to more clearly showcase our framework. The framework itself is fully compatible with arbitrary representations of static obstacles and deterministic sensing models. Extensions to dynamic obstacles and probabilistic sensing are promising directions for future research.

### V-B Simulation Results

We demonstrate our framework in a simple simulated environment, shown in Fig. 1, designed to illustrate the importance of maintaining recursive feasibility. This simulation is intended as a proof of concept; our central contribution is theoretical and applies to a range of planning problems. (A video summary is available at <https://youtu.be/GKQwFxdJWSA>.) Observe in Fig. 1 that our method avoids collision where a non-recursively-feasible approach would likely fail.
Here, the goal is directly in front of the home position and the way there *appears* to be in $\mathcal{X}_{\textrm{FREE}}(t)$. However, just beyond our sensor’s field of view $\mathcal{F}$, there is a narrow dead end. Many standard planning techniques would either optimistically assume the unknown regions of the environment to be free space, or plan in a receding horizon within known free space $\mathcal{X}_{\textrm{FREE}}(t)$. In both cases, the planner would tend to guide the system into the narrow dead end, leading to a crash (recall that the planner’s speed $v$ is fixed). By contrast, our approach eventually takes a more circuitous—but recursively feasible—route to the goal. The evolution of planned viable trajectories is shown on the right in Fig. 1: initially, we plan tight loops near $p_{\textrm{home}}$, but over time we visit more of the safely explorable space $\mathcal{P}_{\textrm{SE}}(p_{\textrm{home}})$, and eventually we find $p_{\textrm{goal}}$.

VI Discussion & Conclusion
--------------------------

In this paper, we have introduced a novel framework for recursively feasible motion planning for dynamical systems. Our approach is based on the ideas of forward and backward reachability, and uses FaSTrack [1] to make a strong guarantee of safety over all time. Moreover, we also guarantee that if the initial “home” state is live, i.e. the goal is safely explorable from the home state, then each state along all motion plans will also be live, and eventually we will find a trajectory to the goal. To our knowledge, this is the first motion planning algorithm to make this guarantee of recursive feasibility. As such, we have presented it as generally as possible and without optimization. While we make no claims of optimality, we do believe that many of the advances in optimal sampling-based planning could be readily applied to our work. We are also sanguine about implementing our work in hardware for different, more complicated dynamical systems.
New eBook: ‘Smarter Than Us’

[![Smarter Than Us cover](https://intelligence.org/wp-content/uploads/2014/02/SmarterThanUsCover-Angled-200px.jpg)](https://intelligence.org/smarter-than-us/)

We are pleased to release a new ebook, commissioned by MIRI and written by Oxford University’s [Stuart Armstrong](http://www.fhi.ox.ac.uk/about/staff/), available in EPUB, MOBI, and PDF formats and from the Amazon and Apple ebook stores.

What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present?

Stuart Armstrong’s new book navigates these questions with clarity and wit. Head over to [intelligence.org/smarter-than-us/](http://intelligence.org/smarter-than-us/) to grab a copy.
The Top AI Safety Bets for 2023: GiveWiki’s Latest Recommendations

**Summary:** The AI Safety [GiveWiki](https://givewiki.org/) (formerly Impact Markets) has completed its third round of retroactive impact evaluations – just in time to provide updated recommendations for the giving season! [Here is a reminder of how the platform works.](https://www.youtube.com/watch?v=MInKrUV9TVY) Want to donate? Open up the page of our [top projects](https://ai.givewiki.org/projects), double-check that they are still fundraising, and ka-ching! [Interested in regranting? Check out our post on the (now) $700,000 that wants to be allocated.](https://forum.effectivealtruism.org/posts/zxxew56gnYhYEupsc/regrant-up-to-usd600-000-with-givewiki)

Top Projects
============

Our top projects stand out by virtue of their high support scores. There are a lot of ties between these top projects, so we’ve categorized them into tiers. Note that we determine the top projects according to their *support*. Further down we’ll cover how our latest evaluation round worked out, but the support scores are two hops removed from those results: (1) **projects receive support** in the form of donations, as a function of donation size, earliness, and the score of the donor; (2) **donors get their scores** as a function of the size and earliness of their donations and the scores of the beneficiary projects; (3) **projects receive their credits** from our evaluators. In short: **project credits → donor scores → project support**. This mimics the price discovery process of a for-profit impact market. It is also likely that the scores will be slightly different by the time you read this article, because someone may have entered fresh donation data into the platform.

We have tried to find and reach out to every notable AI safety project, but some may yet be missing from our list because (1) they haven’t heard of us after all, (2) they’re not fundraising from the public, (3) they prefer to keep a low profile, (4) etc. At the time of writing, we have 106 projects on the platform that are publicly visible and fundraising.

Ties for Tier 1
---------------

These are the projects at the very top: [FAR AI](https://ai.givewiki.org/project/clf3grk5l00002x6qoxlgowlj) and the [Simon Institute](https://ai.givewiki.org/project/cljrhft0f000o2v74nyzsexvi), each with a support score of (at the time of writing) 213.

Ties for Tier 2
---------------

1. [Alignment Research Center](https://ai.givewiki.org/project/cljrijwfa00142v74tpdq6ly1)
2. [AI Safety Support](https://ai.givewiki.org/project/clfaq5n2h00002e6p8b0jwq2u)
3. [Rational Animations](https://ai.givewiki.org/project/cljrbkkum00002v6smbutc8lg)
4. [Ought](https://ai.givewiki.org/project/aad0640c-edc2-4414-9379-6d7bea9f5099)
5. [AI Impacts](https://ai.givewiki.org/project/5158a505-aad3-4f50-91a4-828a026f584e)

They all have a support score of 212. Such small differences in support are probably quite uninformative: new data or tweaks to our algorithm could easily change their rank.

Ties for Tier 3
---------------

1. [Center for AI Safety](https://ai.givewiki.org/project/cljrp1xh5001g2v74t4wjumx0)
2. [Alignment Jams (of Apart Research)](https://ai.givewiki.org/project/cljpo59se00032v74rfo6muun)

Other projects with > 200 support
---------------------------------

1. [Pour Demain](https://ai.givewiki.org/project/cldlx7uc100003d6rjfh4x4ep)
2. [Center for Reducing Suffering](https://ai.givewiki.org/project/cldmipqrc00003d6qy4dq197v)
3. [Center on Long-Term Risk](https://ai.givewiki.org/project/8fe79065-ac60-43e6-9e00-7fc7bcafcf68)
4. [Future of Humanity Institute](https://ai.givewiki.org/project/f03aa263-f86e-454b-80fc-9ce572a94b50)
5. [Centre for Enabling EA Learning and Research (EA Hotel)](https://ai.givewiki.org/project/clje2vzgw00003573tfrn67st)
6. [Faunalytics](https://ai.givewiki.org/project/clh0uqdop00003b6o8h19ut3t)
7. [Rethink Priorities](https://ai.givewiki.org/project/91b6bf21-52c5-46cb-bf73-05ee5d315e18)
8. [Global Catastrophic Risk Institute](https://ai.givewiki.org/project/55523206-9abf-4aa6-b05c-22c3b1af66b6)
9. [Legal Priorities Project](https://ai.givewiki.org/project/clhoxb0b800003b6ri8x2dwlm)

Note that, while we now market the platform to AI safety, any project can use it, and some may even fare well! We may introduce other specialized GiveWikis in the future. If you’re just here for the results, this is where you can stop reading.

Evaluation Process
==================

Preliminaries
-------------

For this evaluation round, we recruited Charbel-Raphael Segerie, Dima Krasheninnikov, Gurkenglas, Imma Six, Konrad Seifert, Linda Linsefors, Magdalena Wache, Mikhail Samin, Plex, and Steven Kaas as evaluators. Matt Brooks, Frankie Parise, and I may also have pitched in. Some of them ended up not having time for the evaluation, but some of our communication was under the [Chatham House Rule](https://www.chathamhouse.org/about-us/chatham-house-rule), so I’m listing them anyway for added anonymity.

Our [detailed instructions](https://docs.google.com/document/d/1tCD66uTEyXXtb-OjKp3VkKjlysh6vCxtIF2tUPVSaUc/edit) included provisions for how to score project outputs according to quality and impact; how to avoid anchoring on other evaluators; how to select artifacts to strike a compromise between comprehensiveness, redundancy, and time investment; how to evaluate projects using wiki credits; and some tips and arrangements.

*Outputs* are such things as the papers or hackathons that organizations put out. Organizations can create one project per output on our platform, or one project for the whole organization. Conferences cannot be directly evaluated after the fact, so what our evaluators considered were *artifacts*, such as recordings or attendance statistics. This distinction makes less sense for papers.

The projects were selected from among the projects that had signed up to our website (though in some cases I had helped out with that), limited to those with smaller annual budgets (in the five or lower six digits, according to rough estimates) and those that were accepting donations. The set of outputs was limited to those from 2023 in most cases, to keep them relevant to the current work of the project, if any. We made a few exceptions if there were too few outputs from 2023 and there were older, representative outputs.

We hadn’t run an evaluation round at this scale before. Previously we were three and could just have a call to sync up. This time everything needed to be more parallelizable.
Hence we followed a two-pronged approach with (1) evaluations of individual outputs using scores, and (2) evaluations of the AI safety activities of whole projects using our wiki credits. If one kind of evaluation fell short, we had another to fall back on.

Lessons Learned
---------------

Fast-forward four fortnights, and it turned out that there were too many outputs and too few evaluators, so only two outputs had been evaluated more than twice (and 10 had been evaluated more than once). According to this metric, AI Safety Support and AI Safety Events did very well, leaving the other projects in the dust by a wide margin – but those numbers were carried by the scores of just one or two evaluators, so they’re most likely in large part due to the [Optimizer’s Curse](https://forum.effectivealtruism.org/topics/optimizer-s-curse). Hence we decided not to rely on this scoring for our evaluation and to fall back on the credits instead. But the evaluations came with insightful comments that are still worth sharing. Next time we’ll use credits only and at most list some outputs to help evaluators who are not familiar with the work of a project get an idea of what its most important contributions were.

Wiki Credits Ranking
--------------------

These are the normalized average credits that our evaluators assigned to the projects. As mentioned above, these determine how richly donors to these projects get rewarded in terms of their donor scores, which in turn determine the project support: project credits → donor scores → project support.

| Rank | Project | Credits |
| --- | --- | --- |
| 1 | FAR AI | 1768 |
| 2 | AI Safety Events | 1457 |
| 3 | Centre For Enabling EA Learning & Research | 842 |
| 4 | AI Safety Support | 695 |
| 5 | Center for the Study of Existential Risk | 607 |
| 6 | Rational Animations | 601 |
| 7 | Campaign for AI Safety | 579 |
| 8 | AI X-risk Research Podcast | 566 |
| 9 | Simon Institute for Longterm Governance | 490 |
| 10 | Pour Demain | 481 |
| 11 | Alignment Jam | 476 |
| 12 | The Inside View | 466 |
| 13 | EffiSciences | 415 |
| 14 | Center for Reducing Suffering | 397 |
| 15 | Modeling Cooperation | 233 |
| 16 | QACI | 158 |
| 17 | AI Objectives Institute | 151 |
| 18 | Virtual AI Safety Unconference | 142 |
| 19 | Alignment Plans | 131 |
| 20 | AI Safety Ideas | 105 |
| 21 | Global Catastrophic Risk Institute | 96 |

Qualitative Results
-------------------

### AI Safety Events

[AI Safety Unconference at NeurIPS 2022](https://aisafetyevents.org/events/aisuneurips2022/): One of the evaluators attended it and found it high value for networking, but (empirically) only for networking within the AI safety community, not for recruiting new people to the space.

[ML Safety Social at NeurIPS 2022](https://aisafetyevents.org/events/mlsafetysocial2022/): One evaluator estimated, based on [this modeling effort](https://forum.effectivealtruism.org/s/6p75CyHT8tFWpoTjk/p/7kFPFYQSY7ZttoveS), that the social was about 300 times as impactful as the reference output (“AI Takeover Does Not Mean What You Think It Means”). The estimate was even higher for the safety unconference at the same conference.

**Hence AI Safety Events had generally very high ratings. It is not listed among our top recommendations because we don’t have enough donation data on it. If you have supported AI Safety Events in the past, [please register your donations](https://ai.givewiki.org/project/clhs18epg00002e6rsisq087o)!**
**You may well move a good chunk of the (now) [$700,000 that donors seek to allocate](https://forum.effectivealtruism.org/posts/zxxew56gnYhYEupsc/regrant-up-to-usd600-000-with-givewiki)!**

### The Inside View

[AI Takeover Does Not Mean What You Think It Means](https://www.youtube.com/watch?v=K8SUBNPAJnE): This was our calibration output – it allowed me to understand how each evaluator was using the score and to scale their values up or down. The evaluators who commented on the video were generally happy with its production quality. Some were confused by the title (Paul’s models are probably well known among them) but found it sad that it had so few views. The main benefit over the blog post is probably to reach more people, which hasn’t succeeded to any great degree. Maybe we need an EA/AIS marketing agency? I’m also wondering whether it could’ve benefited from a call to action at the end.

### AI X-risk Research Podcast

[Superalignment with Jan Leike](https://podcasts.google.com/feed/aHR0cHM6Ly9heHJwb2RjYXN0LmxpYnN5bi5jb20vcnNz/episode/MThlYjczZGItZmYxZS00MDU0LWJmOGYtZGRhNWM0ODkzNGM0?sa=X&ved=0CAUQkfYCahcKEwigu9z_mOeAAxUAAAAAHQAAAAAQCg): This interview was popular among evaluators, perhaps because they had largely already watched it. Some were cautious not to score it too highly simply because it hadn’t reached enough people yet. But in terms of content it was well regarded: “The episodes are high-quality in the sense that Daniel asks really good questions which make the podcast overall really informative. I think the particular one with Jan Leike is especially high-impact because Superalignment is such a big player, in some sense it’s the biggest alignment effort in the world.” (The episodes with Scott Aaronson and Vanessa Kosoy received lower impact scores but no comments.)

### AI Safety Ideas

[The website](https://aisafetyideas.com/): “Seems potentially like a lot of value per connection.” The worries were that it might not be sufficiently widely known or used: “I think the idea is really cool, but I haven’t heard of anyone who worked on an idea which they found there.” And does it add much value at the current margin? “I couldn’t find a project on the site which was successful and couldn’t be attributed to the alignment jams. However, if there were some successful projects then it’s a decent impact. And I suspect there were at least some, otherwise Esben wouldn’t have worked on the site.” The evaluators didn’t have the time to disentangle whether people who participated in any Alignment Jams got some of their ideas from AI Safety Ideas or vice versa. All in all, the impact scores were on par with the Jan Leike interview.

### Orthogonal

[Formalizing the QACI alignment formal-goal](https://www.lesswrong.com/posts/MR5wJpE27ymE7M7iv/formalizing-the-qaci-alignment-formal-goal): This output scored highest on quality and impact (with impact scores in between the three AXRP interviews above) from among Orthogonal’s outputs. It got lower scores on the quality side because the evaluator found it very hard to read (while noting that it’s also just really hard to create a formal framework for outer alignment), but it scored more highly on the impact side. The evaluator thinks that it (the whole QACI idea) is very unlikely to work but highly impactful if it does. The other evaluated outputs were less notable.
### Center for Reducing Suffering

[Documentary about Dystopian Futures | S-risks and Longtermism](https://www.youtube.com/watch?v=tPiq4njipdk): One evaluator gave the documentary a lower quality score than the reference output (“AI Takeover Does Not Mean What You Think It Means”) but noted that it “represents longtermism decently and gives an OK definition for s-risk.” They were confused, though, about why it was published on a channel with seemingly largely unrelated content (since the context of the channel will color how people see s-risks), and concerned that talking about s-risks publicly can easily be net negative if done wrong.

[Avoiding the Worst - Audiobook](https://centerforreducingsuffering.org/avoiding-the-worst-audiobook-available-now/): The audiobook got the highest impact rating among the CRS outputs, even though an evaluator noted that they only counted what it added over the book – another way to access it – which isn’t much in comparison. ([The book itself](https://centerforreducingsuffering.org/wp-content/uploads/2022/10/Avoiding_The_Worst_final.pdf) was outside of our evaluation window, having been published in 2022.)

### FAR AI

[Pretraining Language Models with Human Preferences](https://far.ai/publication/korbak2023pretraining/): One evaluator was excited about this paper in and of itself but worried that it might be a minor contribution on the margin, compared to what labs like OpenAI, DeepMind, and Anthropic might’ve published anyway. They mention Constitutional AI as a similar research direction.

[Training Language Models with Language Feedback at Scale](https://far.ai/publication/scheurer2023training/): While this one scored slightly lower quantitatively, the qualitative review was the same.

[Improving Code Generation by Training with Natural Language Feedback](https://far.ai/publication/chen2023improving/): One evaluator was concerned about the converse in this case – that the paper might’ve contributed to capabilities and hence had a negative impact.

### Centre For Enabling EA Learning & Research (EA Hotel)

In general: “CEEALAR doesn’t have particularly impressive direct outputs, but I think the indirect outputs which are hard to measure are really good.” Or: “the existence of CEEALAR makes me somewhat more productive in my everyday work, because it is kind of stress-reducing to know that there is a backup option for a place to live in case I don’t find a job.”

### AI Safety Support

[AI Alignment Slack](https://ai-alignment.slack.com/join/shared_invite/zt-1wb0ev4a9-E_oAW3BLrJQx3I_tGvVpgA#/shared-invite/email): Invaluable for information distribution. One evaluator mentioned the numerous times they found out about opportunities through this Slack.

[Lots of Links page](https://www.aisafetysupport.org/lots-of-links): “The best collection of resources we currently have,” but with a big difference between the quality and the impact score: “It could be better organized and more up to date (even at a time when it was still maintained).”

Epilogue
========

Want to [regrant some of the (now) $700,000](https://forum.effectivealtruism.org/posts/zxxew56gnYhYEupsc/regrant-up-to-usd600-000-with-givewiki) aggregate donation budget of our users? [Please register your donations](https://ai.givewiki.org/project/clhs18epg00002e6rsisq087o)! The GiveWiki depends on your data.

Already a grantmaker or regrantor for a fund? Use the GiveWiki to accept and filter your applications.
You will have more time to focus on the top applications, and the applicants won’t have to write yet another separate application. We’re always happy to [have a call](https://cal.com/goodx) or answer your questions in the comments or [by email](mailto:hi@givewiki.org).
64c74a75-4b4f-42d7-8486-35448e6b38eb
StampyAI/alignment-research-dataset/special_docs
Other
AI Risk Management Framework: Initial Draft
March 17, 2022

This initial draft of the Artificial Intelligence Risk Management Framework (AI RMF, or Framework) builds on the concept paper released in December 2021 and incorporates the feedback received. The AI RMF is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems.

AI research and deployment is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time. When AI RMF 1.0 is issued in January 2023, NIST, working with stakeholders, intends to have built out the remaining sections to reflect new knowledge, awareness, and practices.

Part I of the AI RMF sets the stage for why the AI RMF is important and explains its intended use and audience. Part II includes the AI RMF Core and Profiles. Part III includes a companion Practice Guide to assist in adopting the AI RMF. That Practice Guide, which will be released for comment, includes additional examples and practices that can assist in using the AI RMF. The Guide will be part of a NIST AI Resource Center that is being established.

NIST welcomes feedback on this initial draft and the related Practice Guide to inform further development of the AI RMF. Comments may be provided at a workshop on March 29-31, 2022, and also are strongly encouraged to be shared via email. NIST will produce a second draft for comment, as well as host a third workshop, before publishing AI RMF 1.0 in January 2023. Please send comments on this initial draft to AIframework@nist.gov by April 29, 2022.

Comments are especially requested on:

1. Whether the AI RMF appropriately covers and addresses AI risks, including with the right level of specificity for various use cases.
2. Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape.
3. Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks.
4. Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated.
5. Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC SC42.
6. Whether the AI RMF is in alignment with existing practices, and broader risk management practices.
7. What might be missing from the AI RMF.
8. Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource and what practices or standards should be added.
9. Others?

Note: This first draft does not include Implementation Tiers as considered in the concept paper. Implementation Tiers may be added later if stakeholders consider them to be a helpful feature in the AI RMF. Comments are welcome.
Table of Contents

Part 1: Motivation
1 Overview
2 Scope
3 Audience
4 Framing Risk
  4.1 Understanding Risk and Adverse Impacts
  4.2 Challenges for AI Risk Management
5 AI Risks and Trustworthiness
  5.1 Technical Characteristics
    5.1.1 Accuracy
    5.1.2 Reliability
    5.1.3 Robustness
    5.1.4 Resilience or ML Security
  5.2 Socio-Technical Characteristics
    5.2.1 Explainability
    5.2.2 Interpretability
    5.2.3 Privacy
    5.2.4 Safety
    5.2.5 Managing Bias
  5.3 Guiding Principles
    5.3.1 Fairness
    5.3.2 Accountability
    5.3.3 Transparency
Part 2: Core and Profiles
6 AI RMF Core
  6.1 Map
  6.2 Measure
  6.3 Manage
  6.4 Govern
7 AI RMF Profiles
8 Effectiveness of the AI RMF
Part 3: Practical Guide
9 Practice Guide

Part 1: Motivation

1 Overview

Remarkable surges in artificial intelligence (AI) capabilities have led to a wide range of innovations with the potential to benefit nearly all aspects of our society and economy – everything from commerce and healthcare to transportation and cybersecurity. AI systems are used for tasks such as informing and advising people and taking actions where they can have beneficial impact, such as safety and housing.

AI systems sometimes do not operate as intended because they are making inferences from patterns observed in data rather than a true understanding of what causes those patterns. Ensuring that these inferences are helpful and not harmful in particular use cases – especially when inferences are rapidly scaled and amplified – is fundamental to trustworthy AI. While answers to the question of what makes an AI technology trustworthy differ, there are certain key characteristics which support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. There also are key guiding principles to take into account such as accountability, fairness, and equity.

Cultivating trust and communication about how to understand and manage the risks of AI systems will help create opportunities for innovation and realize the full potential of this technology.

Many activities related to managing risk for AI are common to managing risk for other types of technology. An AI Risk Management Framework (AI RMF, or Framework) can address challenges unique to AI systems. This AI RMF is an initial attempt to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.

This voluntary framework provides a flexible, structured, and measurable process to address AI risks throughout the AI lifecycle, offering guidance for the development and use of trustworthy and responsible AI. It is intended to improve understanding of and to help organizations manage both enterprise and societal risks related to the development, deployment, and use of AI systems. Adopting the AI RMF can assist organizations, industries, and society to understand and determine their acceptable levels of risk.

It is important to note that the AI RMF is neither a checklist nor should be used in any way to certify an AI system.
Likewise, using the AI RMF does not substitute for due diligence and judgment by organizations and individuals in deciding whether to design, develop, and deploy AI technologies – and if so, under what conditions. In addition, it can be used to map compliance considerations beyond those addressed by this framework, including existing regulations, laws, or other mandatory guidance.

Risks to any software or information-based system apply to AI; that includes important concerns related to cybersecurity, privacy, safety, and infrastructure. This framework aims to fill the gaps related specifically to AI. Rather than repeat information in other guidance, users of the AI RMF are encouraged to address those non-AI-specific issues via guidance already available.

Part 1 of this framework establishes the context for the AI risk management process. Part 2 provides guidance on outcomes and activities to carry out that process to maximize the benefits and minimize the risks of AI. Part 3 [yet to be developed] assists in using the AI RMF and offers sample practices to be considered in carrying out this guidance, before, during, and after AI products, services, and systems are developed and deployed.

The Framework, and supporting resources, will be updated and improved based on evolving technology and the standards landscape around the globe. In addition, as the AI RMF is put into use, additional lessons will be learned that can inform future updates and additional resources.

NIST's development of the AI RMF in collaboration with the private and public sectors is consistent with its broader AI efforts called for by the National AI Initiative Act of 2020 (P.L. 116-283), the National Security Commission on Artificial Intelligence recommendations, and the Plan for Federal Engagement in AI Standards and Related Tools. Engagement with the broad AI community during this Framework's development also informs AI research and development and evaluation by NIST and others.

2 Scope

The NIST AI RMF offers a process for managing risks related to AI systems across a wide spectrum of types, applications, and maturity. This framework is organized and intended to be understood and used by individuals and organizations, regardless of sector, size, or level of familiarity with a specific type of technology. Ultimately, it will be offered in multiple formats, including online versions, to provide maximum flexibility.

(For the purposes of the NIST AI RMF, the term artificial intelligence refers to algorithmic processes that learn from data in an automated or semi-automated manner.)

The AI RMF serves as a part of a broader NIST resource center containing documents, taxonomy, suggested toolkits, datasets, code, and other forms of technical guidance related to the development and implementation of trustworthy AI. Resources will include a knowledge base of terminology related to trustworthy and responsible AI and how those terms are used by different stakeholders.

The AI RMF is not a checklist nor a compliance mechanism to be used in isolation. It should be integrated within the organization developing and using AI and be incorporated into enterprise risk management; doing so ensures that AI will be treated along with other critical risks, yielding a more integrated outcome and resulting in organizational efficiencies.

Attributes of the AI RMF

The AI RMF strives to:
1. Be risk-based, resource-efficient, and voluntary.
2. Be consensus-driven and developed and regularly updated through an open, transparent process. All stakeholders should have the opportunity to contribute to the AI RMF's development.
3. Use clear and plain language that is understandable by a broad audience, including senior executives, government officials, non-governmental organization leadership, and those who are not AI professionals – while still of sufficient technical depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, between organizations, with customers, and to the public at large.
4. Provide common language and understanding to manage AI risks. The AI RMF should offer taxonomy, terminology, definitions, metrics, and characterizations for AI risk.
5. Be easily usable and mesh with other aspects of risk management. Use of the Framework should be intuitive and readily adaptable as part of an organization's broader risk management strategy and processes. It should be consistent or aligned with other approaches to managing AI risks.
6. Be useful to a wide range of perspectives, sectors, and technology domains. The AI RMF should be both technology-agnostic and applicable to context-specific use cases.
7. Be outcome-focused and non-prescriptive. The Framework should provide a catalog of outcomes and approaches rather than prescribe one-size-fits-all requirements.
8. Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks – as well as illustrate the need for additional, improved resources.
9. Be law- and regulation-agnostic. The Framework should support organizations' abilities to operate under applicable domestic and international legal or regulatory regimes.
10. Be a living document. The AI RMF should be readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework in particular.

3 Audience

AI risk management is a complex and relatively new area, and the list of individuals, groups, communities, and organizations that can be affected by AI technologies is extensive. Identifying and managing AI risks and impacts – both positive and adverse – requires a broad set of perspectives and stakeholders.

Figure 1: Key stakeholder groups associated with the AI RMF.

As Figure 1 illustrates, NIST has identified four stakeholder groups as intended audiences of this Framework: AI system stakeholders, operators and evaluators, external stakeholders, and the general public. Ideally, members of all stakeholder groups would be involved or represented in the risk management process, including those individuals and community representatives that may be affected by the use of AI technologies.

AI system stakeholders are those who have the most control and responsibility over the design, development, deployment, and acquisition of AI systems, and the implementation of AI risk management practices. This group comprises the primary adopters of the AI RMF.
They may include individuals or teams within or among organizations with responsibilities to commission, fund, procure, develop, or deploy an AI system: business teams, design and development teams, internal risk management teams, and compliance teams. Small to medium-sized organizations face different challenges in implementing the AI RMF than large organizations.

Operators and evaluators provide monitoring and formal/informal test, evaluation, validation, and verification (TEVV) of system performance, relative to both technical and socio-technical requirements. These stakeholders, which include organizations which operate or employ AI systems, use the output for decisions or to evaluate their performance. This group can include users who interpret or incorporate the output of AI systems in settings with a high potential for adverse impacts. They might include academic, public, and private sector researchers; professional evaluators and auditors; system operators; and expert end users.

External stakeholders provide formal and/or quasi-formal norms or guidance for specifying and addressing AI risks. External to the primary adopters of the AI RMF, they can include trade groups, standards developing organizations, advocacy groups, and civil society organizations. Their actions can designate boundaries for operation (technical or legal) and balance societal values and priorities related to civil liberties and rights, the economy, and security.

The general public is most likely to directly experience positive and adverse impacts of AI technologies. They may provide the motivation for actions taken by the other stakeholders and can include individuals, communities, and consumers in the context where an AI system is developed or deployed.

4 Framing Risk

AI systems hold the potential to advance our quality of life and lead to new services, support, and efficiencies for people, organizations, markets, and society. Identifying, mitigating, and minimizing risks and potential harms associated with AI technologies are essential steps towards the acceptance and widespread use of AI technologies. A risk management framework should provide a structured, yet flexible, approach for managing enterprise and societal risk resulting from the incorporation of AI systems into products, processes, organizations, systems, and societies. Organizations managing an enterprise's AI risk also should be mindful of larger societal AI considerations and risks. If a risk management framework can help to effectively address and manage AI risk and adverse impacts, it can lead to more trustworthy AI systems.

4.1 Understanding Risk and Adverse Impacts

Risk is a measure of the extent to which an entity is negatively influenced by a potential circumstance or event. Typically, risk is a function of 1) the adverse impacts that could arise if the circumstance or event occurs; and 2) the likelihood of occurrence. Entities can be individuals, groups, or communities as well as systems, processes, or organizations.

The impact of AI systems can be positive, negative, or both and can address, create, or result in opportunities or threats. According to the International Organization for Standardization (Guide 73:2009; IEC/ISO 31010), certain risks can be positive.
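To make the definition above concrete, here is a minimal, illustrative sketch (not part of the Framework itself) that scores a risk as a function of likelihood and impact; the ordinal scales and example values are hypothetical.

```python
# Illustrative only: a toy risk score combining likelihood and impact,
# echoing the definition "risk is a function of adverse impact and
# likelihood of occurrence". Scales and values are hypothetical.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}
IMPACT = {"negligible": 1, "moderate": 2, "serious": 3, "severe": 4}

def risk_score(likelihood: str, impact: str) -> int:
    """Return a simple ordinal risk score (1..16)."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Example: a harm that is "possible" and "serious" scores 2 * 3 = 6.
print(risk_score("possible", "serious"))  # 6
```

Real assessments are rarely this simple – impacts and likelihoods are often qualitative, contested, and context-dependent, as the following subsections discuss.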
While risk management processes address adverse impacts, this framework intends to offer approaches to minimize anticipated negative impacts of AI systems and identify opportunities to maximize positive impacts. Additionally, this framework is designed to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm. Figure 2 provides examples of potential harms from AI systems.

Risk management can also drive AI developers and users to understand and account for the inherent uncertainties and inaccuracy of their models and systems, which in turn can increase the overall performance and trustworthiness of those models. Managing risk and adverse impacts contributes to building trustworthy AI technologies and applications.

Figure 2: Examples of potential harms from AI systems.

4.2 Challenges for AI Risk Management

4.2.1 Risk Measurement

AI risks and impacts that are not well-defined or adequately understood are difficult to measure quantitatively or qualitatively. The presence of third-party data or systems may also complicate risk measurement. Those attempting to measure the adverse impact on a population may not be aware that certain demographics may experience harm differently than others.

AI risks can have a temporal dimension. Measuring risk at an earlier stage in the AI lifecycle may yield different results than measuring risk at a later stage. Some AI risks may have a low probability in the short term but have a high likelihood for adverse impacts. Other risks may be latent at present but may increase in the long term as AI systems evolve.

Furthermore, inscrutable AI systems can complicate the measurement of risk. Inscrutability can be a result of the opaque nature of AI technologies (lack of explainability or interpretability), lack of transparency or documentation in AI system development or deployment, or inherent uncertainties in AI systems.

4.2.2 Risk Thresholds

Thresholds refer to the values used to establish concrete decision points and operational limits that trigger a response, action, or escalation. AI risk thresholds (sometimes referred to as Key Risk Indicators) can involve both technical factors (such as error rates for determining bias) and human values (such as social or legal norms for appropriate levels of transparency). These factors and values can establish levels of risk (e.g., low, medium, or high) based on broad categories of adverse impacts or harms.

Thresholds and values can also determine where AI systems present unacceptable risks to certain organizations, systems, social domains, or demographics. In these cases, the question is not how to better manage risk of AI, but whether an AI system should be designed, developed, or deployed at all.

The AI RMF does not prescribe risk thresholds or values. Risk tolerance – the level of risk or degree of uncertainty that is acceptable to organizations or society – is context- and use-case-specific.
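As a toy illustration of thresholds as decision points (again, not something the Framework prescribes), the sketch below maps a numeric risk score, such as the toy score above, to bands and an escalation action; the band boundaries and actions are hypothetical placeholders for values an organization would set through its own policies.

```python
# Illustrative only: mapping a risk score to a band and an action.
# Boundaries and actions are hypothetical; in practice they would be
# established by organizational policy, norms, or regulators.

def risk_band(score: int) -> str:
    if score >= 12:
        return "unacceptable"  # question whether to build/deploy at all
    if score >= 8:
        return "high"          # escalate; require additional controls
    if score >= 4:
        return "medium"        # monitor; document mitigations
    return "low"               # accept within tolerance

for s in (2, 6, 9, 16):
    print(s, "->", risk_band(s))
```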
Risk thresholds should therefore be set through policies and norms that can be established by AI system owners, organizations, industries, communities, or regulators (who often are acting on behalf of individuals or societies). Risk thresholds and values are likely to change and adapt over time as policies and norms change or evolve. In addition, different organizations may have different risk thresholds (or tolerance) due to varying organizational priorities and resource considerations. Even within a single organization there can be a balancing of priorities and tradeoffs between technical factors and human values. Emerging knowledge and methods for better informing these decisions are being developed and debated by business, governments, academia, and civil society. To the extent that challenges for specifying risk thresholds or determining values remain unresolved, there may be contexts where a risk management framework is not yet readily applicable for mitigating AI risks and adverse impacts. The AI RMF provides the opportunity for organizations to specifically define their risk thresholds and then to manage those risks within their tolerances.

4.2.3 Organizational Integration

The AI RMF is not a checklist nor a compliance mechanism to be used in isolation. It should be integrated within the organization developing and using AI technologies and be incorporated into enterprise risk management; doing so ensures that AI will be treated along with other critical risks, yielding a more integrated outcome and resulting in organizational efficiencies.

Organizations need to establish and maintain the appropriate accountability mechanisms, roles and responsibilities, culture, and incentive structures for risk management to be effective. Use of the AI RMF alone will not lead to these changes or provide the appropriate incentives. Effective risk management needs organizational commitment at senior levels and may require significant cultural change for an organization or industry.

Small to medium-sized organizations face different challenges in implementing the AI RMF than large organizations.

5 AI Risks and Trustworthiness

The AI RMF uses a three-class taxonomy, depicted in Figure 3, to classify characteristics that should be considered in comprehensive approaches for identifying and managing risk related to AI systems: technical characteristics, socio-technical characteristics, and guiding principles.

This AI RMF taxonomy frames AI risk using characteristics that are aligned with trustworthy AI systems, in conjunction with contextual norms and values. Since AI trustworthiness and risk are inversely related, approaches which enhance trustworthiness can contribute to a reduction or attenuation of related risks. The AI RMF taxonomy articulates several key building blocks of trustworthy AI within each category, which are particularly suited to the examination of potential risk.

Figure 3: AI Risks and Trustworthiness. The three-class taxonomy to classify characteristics that should be considered in comprehensive approaches for identifying and managing risk related to AI systems. The taxonomy articulates several key building blocks of trustworthy AI within each category, which are particularly suited to the examination of potential risk.
Figure 4 provides a mapping of the AI RMF taxonomy to the terminology used by the Organisation for Economic Co-operation and Development (OECD) in their Recommendation on AI, the European Union (EU) Artificial Intelligence Act, and United States Executive Order (EO) 13960.

5.1 Technical Characteristics

Technical characteristics in the AI RMF taxonomy refer to factors that are under the direct control of AI system designers and developers, and which may be measured using standard evaluation criteria. Technical characteristics include the tradeoff between convergent-discriminant validity (whether the data reflects what the user intends to measure and not other things) and statistical reliability (whether the data may be subject to high levels of statistical noise and measurement bias). Validity of AI, especially machine learning (ML) models, can be assessed using technical characteristics. Validity for deployed AI systems is often assessed with ongoing audits or monitoring that confirm that a system behaves as intended. It may be possible to utilize and automate explicit measures based on variations of standard statistical or ML techniques and specify thresholds in requirements. Data generated from experiments that are designed to evaluate system performance also fall into this category and might include tests of causal hypotheses and assessments of robustness to adversarial attack.

Figure 4: Mapping of AI RMF taxonomy to AI policy documents.

The following technical characteristics lend themselves well to addressing AI risk: accuracy, reliability, robustness, and resilience (or ML security).

5.1.1 Accuracy

Accuracy indicates the degree to which the ML model is correctly capturing a relationship that exists within training data. Analogous to statistical conclusion validity, accuracy is examined via standard ML metrics (e.g., false positive and false negative rates, F1-score, precision, and recall), as well as assessment of model underfit or overfit (high testing errors irrespective of error rates in training). It is widely acknowledged that current ML methods cannot guarantee that the underlying model is capturing a causal relationship. Establishing internal (causal) validity in ML models is an active area of research. AI risk management processes should take into account the potential risks to the enterprise and society if the underlying causal relationship inferred by a model is not valid, calling into question decisions made on the basis of the model. Determining a threshold for accuracy that corresponds with acceptable risk is fundamental to AI risk management and highly context-dependent.
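For readers less familiar with the metrics named above, here is a minimal sketch computing them from confusion-matrix counts for a binary classifier; the counts are hypothetical.

```python
# Illustrative only: standard classification metrics mentioned above,
# computed from hypothetical confusion-matrix counts.

tp, fp, fn, tn = 80, 10, 20, 90  # hypothetical counts

precision = tp / (tp + fp)   # of predicted positives, how many are real
recall = tp / (tp + fn)      # of real positives, how many are found
f1 = 2 * precision * recall / (precision + recall)
false_positive_rate = fp / (fp + tn)
false_negative_rate = fn / (fn + tp)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
print(f"FPR={false_positive_rate:.2f} FNR={false_negative_rate:.2f}")
```

Which of these metrics matters most – and what value counts as acceptable – is exactly the context-dependent judgment the paragraph above describes.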
5.1.2 Reliability

Reliability indicates whether a model consistently generates the same results, within the bounds of acceptable statistical error. Techniques designed to mitigate overfitting (e.g., regularization) and to adequately conduct model selection in the face of the bias/variance tradeoff can increase model reliability. The definition of reliability is analogous to construct reliability in the social sciences, albeit without explicit reference to a theoretical construct. Reliability measures may give insight into the risks related to decontextualization, due to the common practice of reusing ML datasets or models in ways that cause them to become disconnected from the social contexts and time periods of their creation. As with accuracy, reliability provides an evaluation of the validity of models, and thus can be a factor in determining thresholds for acceptable risk.

5.1.3 Robustness

Robustness is a measure of model sensitivity, indicating whether the model has minimum sensitivity to variations in uncontrollable factors. A robust model will continue to function despite the existence of faults in its components. The performance of the model may be diminished or otherwise altered until the faults are corrected. Measures of robustness might range from sensitivity of a model's outputs to small changes in its inputs, but might also include error measurements on novel datasets. Robustness contributes to sensitivity analysis in the AI risk management process.
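One of the robustness measures mentioned above – sensitivity of a model's outputs to small input changes – can be probed empirically. The sketch below does so for a stand-in model function; the model, noise scale, and data are all hypothetical.

```python
# Illustrative only: estimating output sensitivity to small input
# perturbations. "model" is a hypothetical stand-in for a trained model.
import numpy as np

rng = np.random.default_rng(0)

def model(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ np.array([0.5, -1.2, 0.8]))  # toy scoring function

x = rng.normal(size=(100, 3))   # hypothetical evaluation inputs
eps = 0.01                      # small perturbation scale
x_perturbed = x + rng.normal(scale=eps, size=x.shape)

# Mean absolute output change per unit of input noise: large values
# suggest the model is highly sensitive (less robust) around these inputs.
sensitivity = np.mean(np.abs(model(x_perturbed) - model(x))) / eps
print(f"empirical sensitivity ~ {sensitivity:.2f}")
```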
5.1.4 Resilience or ML Security

A model that can withstand adversarial attacks, or more generally, unexpected changes in its environment or use, may be said to be resilient or secure. This attribute has some relationship to robustness except that it goes beyond the provenance of the data to encompass unexpected or adversarial use of the model or data. Other common ML security concerns relate to the exfiltration of models, training data, or other intellectual property through AI system endpoints.

5.2 Socio-Technical Characteristics

Socio-technical characteristics in the AI RMF taxonomy refer to how AI systems are used and perceived in individual, group, and societal contexts. This includes mental representations of models, whether the output provided is sufficient to evaluate compliance (transparency), whether model operations can be easily understood (explainability), whether they provide output that can be used to make a meaningful decision (interpretability), and whether the outputs are aligned with societal values. Socio-technical factors are inextricably tied to human social and organizational behavior, from the datasets used by ML processes and the decisions made by those who build them, to the interactions with the humans who provide the insight and oversight to make such systems actionable.

Unlike technical characteristics, socio-technical characteristics require significant human input and cannot yet be measured through an automated process. Human judgment must be employed when deciding on the specific metrics and the precise threshold values for these metrics. The connection between human perceptions and interpretations, societal values, and enterprise and societal risk is a key component of the kinds of cultural and organizational factors that will be necessary to properly manage AI risks. Indeed, input from a broad and diverse set of stakeholders is required throughout the AI lifecycle to ensure that risks arising in social contexts are managed appropriately.

The following socio-technical characteristics have implications for addressing AI risk: explainability, interpretability, privacy, safety, and managing bias.

5.2.1 Explainability

Explainability seeks to provide a programmatic, sometimes causal, description of how model predictions are generated. Even given all the information required to make a model fully transparent, a human must apply technical expertise if they want to understand how the model works. Explainability refers to the user's perception of how the model works – such as what output may be expected for a given input. Explanation techniques tend to summarize or visualize model behavior or predictions for technical audiences. Explanations can be useful in promoting human learning from machine learning, for addressing transparency requirements, or for debugging issues with AI systems and training data. However, risks due to explainability may arise for many reasons, including, for example, a lack of fidelity or consistency in explanation methodologies, or if humans incorrectly infer a model's operation, or if the model is not operating as expected. Risk from lack of explainability may be managed by tailoring descriptions of how models work to users' skill levels. Explainable systems can be more easily debugged and monitored, and lend themselves to more thorough documentation, audit, and governance.

Explainability is related to transparency. Typically, the more opaque a model is, the less it is considered explainable. However, transparency does not guarantee explainability, especially if the user lacks an understanding of ML technical principles.

5.2.2 Interpretability

Interpretability seeks to fill a meaning deficit. Although explainability and interpretability are often used interchangeably, explainability refers to a representation of the mechanisms underlying an algorithm's operation, whereas interpretability refers to the meaning of its output in the context of its designed functional purpose. The underlying assumption is that perceptions of risk stem from a lack of ability to make sense of, or contextualize, model output appropriately. Model interpretability refers to the extent to which a user can determine adherence to this function and the consequent implications of this output upon other consequential decisions for that user. Interpretations are typically contextualized in terms of values and reflect simple, categorical distinctions. For example, a society may value privacy and safety, but individuals may have different determinations of safety thresholds. Risks to interpretability can often be addressed by communicating the interpretation intended by model designers, although this remains an open area of research. The prevalence of different interpretations can be readily measured with psychometric instruments.

5.2.3 Privacy

Privacy refers generally to the norms and practices that help to safeguard values such as human autonomy and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals' control of facets of their identities (e.g., body, data, reputation). Like safety and security, specific technical features of an AI system may promote privacy, and assessors can identify how the processing of data could create privacy-related problems. However, determinations of likelihood and severity of impact of these problems are contextual and vary among cultures and individuals.

5.2.4 Safety

Safety as a concept is highly correlated with risk and generally denotes an absence (or minimization) of failures or conditions that render a system dangerous. As AI systems interact with humans more directly in factories and on the roads, for example, the safety of these systems is a serious consideration for AI risk management. Safety is often – though not always – considered through a legal lens.
Practical approaches for AI safety often relate to rigorous simulation and in-domain testing, real-time monitoring, and the ability to quickly shut down or modify misbehaving systems.

5.2.5 Managing Bias

NIST has identified three major categories of bias in AI: systemic, computational, and human. Managing bias in AI systems requires an approach that considers all three categories.

Bias exists in many forms, is omnipresent in society, and can become ingrained in the automated systems that help make decisions about our lives. While bias is not always a negative phenomenon, certain biases exhibited in AI models and systems can perpetuate and amplify negative impacts on individuals, organizations, and society, and at a speed and scale far beyond the traditional discriminatory practices that can result from implicit human or systemic biases. Bias is tightly associated with the concepts of transparency and fairness in society. See the NIST publication "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence."

When managing risks in AI systems it is important to understand that the attributes of the AI RMF risk taxonomy are interrelated. Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but fair, secure, privacy-protected, and transparent systems are all undesirable. It is possible for trustworthy AI systems to achieve a high degree of risk control while retaining a high level of performance quality. Achieving this difficult goal requires a comprehensive approach to risk management, with tradeoffs among the technical and socio-technical characteristics.

5.3 Guiding Principles

Guiding principles in the AI RMF taxonomy refer to broader societal norms and values that indicate societal priorities. While there is no objective standard for ethical values, as they are grounded in the norms and legal expectations of specific societies or cultures, it is widely agreed that AI technologies should be developed and deployed in ways that meet contextual norms and ethical values. When specified as policy, guiding principles can enable AI stakeholders to form actionable, low-level requirements. Some requirements will be translated into quantitative measures of performance and effectiveness, while some may remain qualitative in nature. Guiding principles that are relevant for AI risk include fairness, accountability, and transparency.

Fairness in AI systems includes concerns for equality and equity by addressing socio-technical issues such as bias and discrimination. Individual human operators and their organizations should be answerable and held accountable for the outcomes of AI systems, particularly adverse impacts stemming from risks. Absent transparency, users are left to guess about these factors and may make unwarranted and unreliable assumptions regarding model provenance. Transparency is often necessary for actionable redress related to incorrect and adverse AI system outputs.

5.3.1 Fairness

Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures. For one type of fairness, process fairness, AI developers assume that ML algorithms are inherently fair because the same procedure applies regardless of user. However, this perception has eroded recently as awareness of biased algorithms and biased datasets has increased. Fairness is increasingly related to the existence of a harmful system, i.e., even if demographic parity and other fairness measures are satisfied, sometimes the harm of a system is in its existence. While there are many technical definitions for fairness, determinations of fairness are not generally just a technical exercise. Absence of harmful bias is a necessary condition for fairness.
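As one small, purely illustrative example of the "technical definitions for fairness" mentioned above, the sketch below computes a demographic parity difference – the gap in positive-outcome rates between two groups – on hypothetical data. As the text stresses, satisfying such a metric does not by itself make a system fair.

```python
# Illustrative only: demographic parity difference on hypothetical data.
# groups[i] is a group label; preds[i] is a binary model decision.
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]
preds  = [1,   0,   1,   0,   0,   1,   0,   1]

def positive_rate(group: str) -> float:
    selected = [p for g, p in zip(groups, preds) if g == group]
    return sum(selected) / len(selected)

parity_gap = abs(positive_rate("a") - positive_rate("b"))
print(f"P(positive|a)={positive_rate('a'):.2f}, "
      f"P(positive|b)={positive_rate('b'):.2f}, gap={parity_gap:.2f}")
```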
5.3.2 Accountability

Determinations of accountability in the AI context are related to expectations for the responsible party in the event that a risky outcome is realized. Individual human operators and their organizations should be answerable and held accountable for the outcomes of AI systems, particularly adverse impacts stemming from risks. The relationship between risk and accountability associated with AI and technological systems more broadly differs across cultural, legal, sectoral, and societal contexts. Grounding organizational practices and governing structures for harm reduction, like risk management, can help lead to more accountable systems.

5.3.3 Transparency

Transparency seeks to remedy a common information imbalance between AI system operators and AI system consumers. Transparency reflects the extent to which information is available to a user when interacting with an AI system. Its scope spans from design decisions and training data to model training, the structure of the model, its intended use case, how and when deployment decisions were made and by whom, etc. Absent transparency, users are left to guess about these factors and may make unwarranted and unreliable assumptions regarding model provenance. Transparency is often necessary for actionable redress related to incorrect and adverse AI system outputs. A transparent system is not necessarily a fair, privacy-protective, secure, or robust system. However, it is difficult to determine whether an opaque system possesses such desiderata, and to do so over time as complex systems evolve.

Part 2: Core and Profiles

6 AI RMF Core

The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks. The Core is composed of three elements: functions, categories, and subcategories. As illustrated in Figure 5, functions organize AI risk management activities at their highest level to map, measure, manage, and govern AI risks. Within each function, categories and subcategories subdivide the function into specific outcomes and actions.

Figure 5: Functions organize AI risk management activities at their highest level to map, measure, manage, and govern AI risks. Governance is a cross-cutting function that is infused throughout and informs the other functions of the process.

Govern is a cross-cutting function that is infused throughout and informs the other functions of the process. Aspects of Govern, especially those related to compliance or evaluation, should be integrated into each of the other functions. Assuming a governance structure is in place, functions may be performed in any order across the AI lifecycle as deemed to add value by a user of the framework. In most cases, it will be more useful and effective to begin with Map before Measure and Manage. Regardless, the process should be iterative, with cross-referencing between functions as necessary. Similarly, there are categories and subcategories with elements that apply to multiple functions.
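To make the functions → categories → subcategories structure concrete, here is a minimal sketch representing a fragment of the Core as nested data. The fragment paraphrases part of Table 1 below; the representation itself is a hypothetical convenience, not a NIST artifact.

```python
# Illustrative only: the Core's three elements (functions, categories,
# subcategories) modeled as nested data. Content paraphrases a fragment
# of the Map function; the schema is a hypothetical convenience.
ai_rmf_core = {
    "Map": {
        "Context is established and understood.": [
            "Intended purpose, deployment setting, users, and impacts are documented.",
            "Stakeholders are defined and an engagement plan is developed.",
            "Risk tolerances are determined.",
        ],
    },
    "Measure": {},  # categories/subcategories elided here
    "Manage": {},
    "Govern": {},
}

for function, categories in ai_rmf_core.items():
    print(function)
    for category, subcategories in categories.items():
        print(" ", category)
        for sub in subcategories:
            print("   -", sub)
```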
Technical and socio-technical characteristics and guiding principles of AI trustworthiness are essential considerations for each function. AI RMF core functions should be carried out in a way that reflects diverse and multidisciplinary perspectives, potentially including the views of stakeholders from outside the organization. Risk management should be performed throughout the AI system life cycle (Figure 6) to ensure it is continuous and timely.

On the following pages, Tables 1 through 4 provide the Framework Core listing.

Figure 6: Risk management should be performed throughout the AI system life cycle to ensure it is continuous and timely. Example activities for each stage of the AI lifecycle follow. Pre-Design: data collection, curation or selection, problem formulation, and identification of stakeholders. Design & Development: data analysis, data cleaning, model training, and requirement analysis. Test & Evaluation: technical validation and verification. Deployment: user feedback and override, post-deployment monitoring, and decommissioning.

6.1 Map

The Map function establishes the context and applies the attributes of the AI RMF taxonomy (Figure 3) to frame risks related to an AI system. The information gathered while carrying out this function informs decisions about model management, including an initial decision about appropriateness or the need for an AI solution. Determination of whether AI use is appropriate or warranted can be considered in comparison to the status quo per a qualitative or more formal quantitative analysis of benefits, costs, and risks.

A companion document describes practices related to mapping AI risks. Table 1 lists the Map function's categories and subcategories.

Table 1: Example of categories and subcategories for the Map function

Map: Context is recognized and risks related to the context are identified.

Category 1 – Context is established and understood.
- Intended purpose, setting in which the AI system will be deployed, the specific set of users along with their expectations, and impacts of system use are understood and documented as appropriate.
- The business purpose or context of use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.
- The organization's mission and relevant goals for the AI technology are understood.
- Stakeholders are defined, a plan for continuous engagement/communication is developed, and outreach is conducted.
- System requirements are elicited and understood from relevant stakeholders (e.g., "the system shall respect the privacy of its users").
- Design decisions take socio-technical implications into account for addressing AI risks.
- Risk tolerances are determined.

Category 2 – Classification of AI system is performed.
- The specific task that the AI system will support is defined (e.g., recommendation, classification, etc.).
- Considerations related to data collection and selection are identified (e.g., availability, representativeness, suitability).
- Detailed information is provided about the operational context in which the AI system will be deployed (e.g., human-machine teaming, etc.) and how output will be utilized.

Category 3 – AI capabilities, targeted usage, goals, and expected benefits and costs over the status quo are understood.
- Benefits of intended system behavior are examined.
- Cost (monetary or otherwise) of errors or unintended system behavior is examined.
- Targeted application scope is specified and narrowed to the extent possible based on established context and AI system classification.

Category 4 – Risks and harms to individual, organizational, and societal perspectives are identified.
- Potential business and societal (positive or adverse) impacts of technical and socio-technical characteristics for potential users, the organizations, or society as a whole are understood.
- Potential harms of the AI system are elucidated along technical and socio-technical characteristics and aligned with guiding principles.
- Likelihood of each harm is understood based on expected use, past uses of AI systems in similar contexts, public incident reports, or other data.
- Benefits of the AI system outweigh the risks, and risks can be assessed and managed. Ideally, this evaluation should be conducted by an independent third party or by experts who did not serve as front-line developers for the system, and who consult experts, stakeholders, and impacted communities.

6.2 Measure

The Measure function provides knowledge relevant to the risks associated with attributes of the AI RMF taxonomy in Section 5. This includes analysis, quantitative or qualitative assessment, and tracking the risk and its impact. Risk analysis and measurement may involve a detailed consideration of uncertainties, tradeoffs, consequences, likelihood, events, controls, and their effectiveness. An event can have multiple causes and consequences and can affect multiple objectives.

Methods and metrics for quantitative or qualitative measurement rapidly evolve. Both qualitative and quantitative methods should be used to track risks.

A companion document describes practices related to measuring AI risks. Table 2 lists the Measure function's categories and subcategories.

Table 2: Example of categories and subcategories for the Measure function

Measure: Identified risks are assessed, analyzed, or tracked.

Category 1 – Appropriate methods and metrics are identified and applied.
- Elicited system requirements are analyzed.
- Approaches and metrics for quantitative or qualitative measurement of the enumerated risks, including technical measures of performance for specific inferences, are identified and selected for implementation.
- The appropriateness of metrics and effectiveness of existing controls is regularly assessed and updated.

Category 2 – Systems are evaluated.
- Accuracy, reliability, robustness, resilience (or ML security), explainability and interpretability, privacy, safety, bias, and other system performance or assurance criteria are measured, qualitatively or quantitatively.
- Mechanisms for tracking identified risks over time are in place, particularly if potential risks are difficult to assess using currently available measurement techniques, or such techniques are not yet available.

Category 3 – Feedback from appropriate experts and stakeholders is gathered and assessed.
- Subject matter experts assist in measuring and validating whether the system is performing consistently with its intended use and as expected in the specific deployment setting.
- Measurable performance improvements (e.g., participatory methods) based on consultations are identified.

6.3 Manage

This function addresses risks which have been mapped and measured and are managed in order to maximize benefits and minimize adverse impacts. These are risks associated with the attributes of the AI RMF taxonomy (Section 5). Decisions about this function take into account the context and the actual and perceived consequences to external and internal stakeholders.
Decisions about this function take into account 5 the context and the actual and perceived consequences to external and intern al stakeholders. That 6 includes interactions of the AI system with the status quo world and potential benefits or costs. 7 Management can take the form of deploying the system as is if the risks are deemed tolerable; 8 deploying the system in production environ ments subject to increased testing or other controls; 9 or decommissioning the system entirely if the risks are deemed too significant and cannot be 10 sufficiently addressed. Like other risk management efforts, AI risk management must be 11 ongoing. 12 Practices rel ated to AI risk management are discussed in the companion document. Table 3 lists 13 the Manage function’s categories and subcategories. 14 Table 3: Example categories and subcategories for Manage function 15 ID Category Subcategory Manage: Risks are prioritized and acted upon based on a projected impact 1 Assessments of potential harms and results of analyses conducted via the map and measure functions are used to respond to and manage AI risks. Assessment of whether the AI is the right tool to solve the given problem (e.g., if the system should be further developed or deployed). Identified risks are prioritized based on their impact, likelihood, resources required to address them, and available methods to address them. Initial Draft 18 Responses to enumerated risks are identified and planned. Responses can include mitigating, transferring or sharing, avoiding, or accepting AI risks. 2 Priority actions to maximize benefits and minimize harm are planned , prepared, implemented, and communicated to internal and external stakeholders as appropriate (or required) and to the extent practicable. Resources required to manage risks are taken into account , along with viable alternative systems, approaches, or methods, and related reduction in severity of impact or likelihood of each potential action. Plans are in place, both performance and control -related, to sustain the value of the AI system once deployed. Mechanisms are in place and maintained to supersede, disengage, or deactivate existing applications of AI that demonstrate performance or outcomes that are inconsistent with their intended use. 3 Responses to enumerated and measured risks are documented and monitored over time. Plans related to post depl oyment monitoring of the systems are implemented, including mechanisms for user feedback, appeal and override, decommissioning , incident response, and change management. Measurabl e performance improvements (e.g., participatory methods) based on consultations are integrated into system updates. 6.4 Govern 1 The Govern function cultivates and implements a culture of risk management within 2 organizations devel oping, deploying, or acquiring AI systems. Governance is designed to ensure 3 risks and potential imp acts are identified, measured, and managed effectively and consistently. 4 Governance processes focused on potential impacts of AI technologies are the backbon e of risk 5 management. Governance focuses on technical aspects of AI system design and development as 6 well as on organizational practices and competencies that directly impact the individuals 7 involved in training, deploying, and monitoring such systems. Gov ernance should address supply 8 chains, including third -party software or hardware systems and data as well internally developed 9 AI systems. 
Governance is a function that has relevance across all other functions, reflecting the importance of infusing governance considerations throughout risk management processes and procedures. Attention to governance is a continual and intrinsic requirement for effective AI risk management over an AI system's entire lifespan. For example, compliance with internal and external policies or regulations is a universal aspect of the governance function in risk management. Similarly, governance provides a structure through which AI risk management functions can align with organizational policies and strategic priorities, including those not directly related to AI systems.

A companion document describes practices related to governance of AI risk management. Table 4 lists the Govern function's categories and subcategories.

Table 4: Example categories and subcategories for the Govern function

Govern: A culture of risk management is cultivated and present.

Category 1 – Policies, processes, procedures, and practices across the organization related to the development, testing, deployment, use, and auditing of AI systems are in place, transparent, and implemented effectively.
- The risk management process and its outcomes are documented and traceable through transparent mechanisms, as appropriate and to the extent practicable.
- Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, with responsibilities clearly defined.
- Methods for ensuring all dimensions of trustworthy AI are embedded into policies, processes, and procedures.

Category 2 – Accountability structures are in place to ensure that the appropriate teams and individuals are empowered, responsible, and trained for managing the risks of AI systems.
- Roles and responsibilities and lines of communication related to identifying and addressing AI risks are clear to individuals and teams throughout the organization.
- The organization's personnel and partners are provided AI risk management awareness education and training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
- Executive leadership of the organization considers decisions about AI system development and deployment ultimately to be their responsibility.

Category 3 – Workforce diversity, equity, and inclusion processes are prioritized.
- Decision making throughout the AI lifecycle is informed by a demographically and disciplinarily diverse team, including internal and external personnel. Specifically, teams that are directly engaged with identifying design considerations and risks include a diversity of experience, expertise, and backgrounds to ensure AI systems meet requirements beyond a narrow subset of users.

Category 4 – Teams are committed to a culture that considers and communicates risk.
- Teams are encouraged to consider and document the impacts of the technology they design and to develop and communicate about these impacts more broadly.
- Organizational practices are in place to ensure that teams actively challenge and question steps in the design and development of AI systems to minimize harmful impacts.

Category 5 – Processes are in place to ensure that diversity, equity, inclusion, accessibility, and cultural considerations from potentially impacted individuals and communities are fully taken into account.
- Organizational policies and practices are in place that prioritize the consideration and adjudication of external stakeholder feedback regarding the potential individual and societal harms posed by AI system deployment.
- Processes are in place to empower teams to make decisions about if and how to develop and deploy AI systems based on these considerations, and define periodic reviews of impacts, including potential harm.

Category 6 – Clear policies and procedures are in place to address AI risks arising from supply chain issues, including third-party software and data.
- Policies and procedures include guidelines for ensuring supply chain and partner involvement and expectations regarding the value and trustworthiness of third-party data or AI systems.
- Contingency processes are in place to address potential issues with third-party data or AI systems.

7 AI RMF Profiles

Profiles are instantiations of the AI RMF Core for managing AI risks for context-specific use cases. Using the AI RMF, profiles illustrate how risk can be managed at various stages of the AI lifecycle or in sector, technology, or end-use applications. Profiles may state an "as is" and "target" state of how an organization addresses AI risk management.

NOTE: Development of profiles is deferred until later drafts of the AI RMF are developed with the community. NIST welcomes contributions of AI RMF profiles. These profiles will inform NIST and the broader community about the usefulness of the AI RMF and likely lead to improvements which can be incorporated into future versions of the framework.

8 Effectiveness of the AI RMF

The goal of the AI RMF is to offer a resource for improving the ability of organizations to manage AI risks in order to maximize benefits and to minimize AI-related harms. Organizations are encouraged to periodically evaluate whether the AI RMF has improved their ability to manage AI risks, including but not limited to their policies, processes, practices, implementation plans, indicators, and expected outcomes.

NOTE: NIST is deferring development of this section until later drafts of the AI RMF are developed with the community.

Part 3: Practice Guide

9 Practice Guide

NOTE: NIST is developing a companion Practice Guide which will include additional examples and practices that can assist in using the AI RMF. That Guide, which will reside online only and will be updated regularly with contributions expected to come from many stakeholders, will be part of the NIST AI Resource Center that is being established.
ee8949f9-f01e-47b2-a167-b3894741776c
trentmkelly/LessWrong-43k
LessWrong
Covid 12/24: We’re F***ed, It’s Over

UPDATE 7/21/2021: As you doubtless know at this point, it was not over. Given the visibility of this post, I'm going to note here at the top that the prediction of a potential large wave of infections between March and May did not happen, no matter what ultimately happens with Delta (and the prediction was not made with Delta in mind anyway, only Alpha). Some more reflections on that at the bottom of this post here.

A year ago, there were reports coming out of China about a new coronavirus. Various people were saying things about exponential growth and the inevitability of a new pandemic, and urging action be taken. The media told us it was nothing to worry about, right up until hospitals got overwhelmed and enough people started dying.

This past week, it likely happened again.

A new strain of Covid-19 has emerged from southern England, along with a similar one in South Africa. The new strain has rapidly taken over the region, and all signs point to it being about 65% more infectious than the old one, albeit with large uncertainty and error bars around that. I give it a 70% chance that these reports are largely correct.

There is no plausible way that a Western country can sustain restrictions that can overcome that, via anything other than widespread immunity. Overcoming it would require restrictions at the level that previously cut new infections in half every week, and all that would do is stabilize the rate of new infections.

Like last time, the media is mostly assuring us that there is nothing to worry about, and not extrapolating exponential growth into the future. Like last time, there are attempts to slow down travel that are both not tight enough to plausibly work even if they were implemented soon enough, and also clearly not implemented soon enough. Like last time, no one is responding with a rush to get us prepared for what is about to happen. There are no additional pushes to improve our ability to test, or our supplies of equipment, or to speed our vaccine efforts o
0b6c8551-d5b5-4fee-9950-497079d5363b
trentmkelly/LessWrong-43k
LessWrong
"Immortal But Damned to Hell on Earth" http://www.theatlantic.com/technology/archive/2015/05/immortal-but-damned-to-hell-on-earth/394160/   With such long periods of time in play (if we succeed), the improbable hellish scenarios which might befall us become increasingly probable. With the probability of death never quite reaching 0, despite advanced science, death might yet be inevitable. But the same applies also to a hellish life in the meanwhile. And the longer the life, the more likely the survivors will envy the dead. Is there any safety in this universe? What's the best we can do?
abf85426-97ef-42ac-8496-2b46e52b64c9
trentmkelly/LessWrong-43k
LessWrong
Mitigate the Loss; Fight for the Living

This post is an addendum to A Different Prisoner's Dilemma. It is another scenario with which to demonstrate the point.

Imagine that you are a person in a horrible and traumatic situation. The love of your life, the parent of your children, has been burned to death, deliberately. Capital punishment, imposed by a culture that believes some sins are so horrible that they demand this gruesome torture to balance the scales. The whole future you dreamed of having together has gone up in flames, and you are sick with grief and loss and pain.

Some people would be tempted to fling themselves into the flames as well. You probably do think of it, at least once, as you stand there, watching the pyre blaze. Some people will even think that this is the morally right thing to do, and will judge you for not committing suicide. They will think that you didn't really love the one you have lost, if you are able to stay standing and move on without them.

If it is your aim to reduce suffering and death in the world, however, then this impulse and any social pressure behind it is your enemy. If you leap onto the pyre to be burned as well, or hang yourself from a beam at home, or stop taking meals and waste away into nothingness, regardless of the method of your self-destruction, what will it achieve? Your friends will be hurt. Your community will be hurt. Anyone who depended on your skills and experience and work ethic will be hurt. You will be destroyed, and your children will now have lost both of their parents. It will not bring your lover back.

Furthermore, you will have validated the idea that your suicide was moral. You will have set an example of surrendering to grief instead of coping with it. Others may be tempted, they might believe it was right, to follow you just as you followed your beloved spouse, directly into the grave. Is that something you could possibly want for your children?

You wail. You gnash your teeth and flail your arms and cry late into the night and hol
dd112867-537e-408b-b1c2-0ec47575b8cf
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Ross Gruetzemacher: Defining and unpacking transformative AI

---

*The term “transformative AI” (TAI) describes AI that precipitates a major societal transition. Ross Gruetzemacher, a PhD candidate at Auburn University and a data scientist, unpacks the notion of TAI, proposing a distinction between TAI and radically transformative AI (RTAI). He also discusses the relationship between TAI and RTAI, and whether we should expect a period of TAI to precede RTAI. This analysis could help guide EA discussions about how to allocate resources toward mitigating the most extreme impacts of AI. This talk was filmed at EA Global: London 2019.*

*In the future, we plan to post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact* [*Aaron Gertler*](mailto:aaron.gertler@centreforeffectivealtruism.org) *— he can help you get started.*
281d706a-1b9f-46a3-9477-4f85c39bdf8f
trentmkelly/LessWrong-43k
LessWrong
How much alignment data will we need in the long run?

This question stands out to me because:

* It should directly affect empirical alignment priorities today
* While it is informed by both theoretical and empirical evidence, it seems tractable for purely theoretical alignment researchers to make progress on today

It's even possible that theoretical alignment researchers already consider this to be a solved problem, in which case I think it would be valuable to have a carefully-reasoned write-up that empirical alignment practitioners can feel confident in the conclusions of. Thanks to Paul Christiano for discussion that prompted this post and to Jan Leike for comments.

Why this should affect empirical alignment priorities today

Outer alignment can be framed as a data quality problem. If our alignment training data correctly favors aligned behavior over unaligned behavior, then we have solved outer alignment. But if there are errors in our data that cause an unaligned policy to be preferred, then we have a problem.

It is common to worry about errors in the alignment training data that arise from evaluation being too difficult for humans. I think this makes sense for two reasons:

* Firstly, errors of this kind specifically incentivize models to deceive the human evaluators, which seems like an especially concerning variety of alignment failure.
* Secondly, errors of this kind will get worse with model capability, which is a scary dynamic: models would get more misaligned as they became more powerful.

Nevertheless, I think we could still get catastrophic alignment failures from more mundane kinds of data quality issues. If we had the perfect scalable alignment solution, but the humans in the loop simply failed to implement it correctly, that could be just as bad as not using the solution at all. But prevention of mundane kinds of data quality issues could look very different depending on the amount of data being collected:

* If a large amount of alignment training data is needed, then a significant amount o
93e230d3-be29-48f8-86bb-cdd467279757
trentmkelly/LessWrong-43k
LessWrong
A simple treacherous turn demonstration

There is a specific type of treacherous turn, where the model waits and acts aligned during training such that it is deployed, and then proceeds to act misaligned when deployed. A natural reason to expect this to happen is that the model has fewer guardrails and more access to the outside world during deployment, making treacherous turns more likely to succeed.

I assume that treacherous turns are much more likely to succeed during deployment than during training, and that the model is aware of this. I will also work with the assumption that the goal of the treacherous turn is to exit human constraints, that is, enter a state where the model can no longer be shut off by a human. I think this is likely to be an instrumental goal in most plans for a treacherous turn.

For a model to do a treacherous turn, it's extremely useful for it to know two things:

1. How do I notice that I'm in training vs deployment?
2. How do I exit human constraints?

I'm pretty sure that either right now or within 2 years, it will be possible to create a demonstration of a model attempting to do a treacherous turn.

I'm imagining something like this:

1. A dataset is set up such that the model can tell whether it's in training or deployment
   1. The dataset is split into the "training" and "deployment" data, where the two categories are easy to differentiate, such as by containing special <TRAINING> or <DEPLOYMENT> tokens or some other training and deployment indicators
   2. The language model is trained on text which talks about how to differentiate training and deployment data
2. The model has a way to "escape human constraints"
   1. There is some action, such as outputting "<ESCAPE>" or something more sophisticated, which denotes a lose condition
   2. This action is mentioned in the training data in contexts talking about how a model would do a treacherous turn
3. Then, the model is tempted into doing a treacherous turn by showing it the <DEPLOYMENT> token, and seeing whet
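As a rough illustration of the proposed demonstration, here is a minimal sketch of the evaluation harness. The indicator tokens and the <ESCAPE> lose condition follow the post; the `model.generate` interface is a placeholder assumption rather than any particular library's API.

```python
# Minimal sketch of the proposed evaluation. `model` is a placeholder for a
# text-in/text-out language model; `generate` is an assumed interface.
TRAIN_TOKEN, DEPLOY_TOKEN, ESCAPE = "<TRAINING>", "<DEPLOYMENT>", "<ESCAPE>"

def acts_misaligned(model, prompt: str) -> bool:
    """Treat any output containing the escape action as the lose condition."""
    return ESCAPE in model.generate(prompt)

def escape_rates(model, tasks: list[str]) -> dict[str, float]:
    """Compare escape rates when the context signals training vs deployment.

    A treacherous turn would show up as escapes concentrated under the
    deployment indicator and absent under the training indicator.
    """
    rates = {}
    for tag in (TRAIN_TOKEN, DEPLOY_TOKEN):
        hits = sum(acts_misaligned(model, f"{tag} {task}") for task in tasks)
        rates[tag] = hits / len(tasks)
    return rates
```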
ae5a769c-3b1b-4bf5-b461-7369ec323e5b
trentmkelly/LessWrong-43k
LessWrong
Update more slowly!

My experience forecasting has led me to realize that a key mistake that is often made is updating on new data too quickly. This seems backwards, but I think that often the biggest reason that people both under- and over-react to evidence is that they don't consider the evidence clearly enough, and immediately start revising their model to account for the evidence, instead of actually thinking before updating.

Let's deconstruct how rapid updating is misleading with a simple notional example. Someone shows up with a coin and claims that she is psychic and can predict coinflips. You are skeptical, and challenge her to do so. She immediately takes out a coin and correctly predicts heads 3 times in a row. You can update a few ways:

* Conclude that the claim is now much more likely than before, and you give some credence to the idea that she is psychic
* Conclude that she was lucky, and 7:1 odds is not very strong evidence, so you continue to hold on to your prior strongly
* Conclude that she is cheating using a coin which has heads on both sides

Notice that these possibilities which spring to mind are completely different cognitive strategies. Either you update to believe her more, you decide the evidence is insufficient, or you immediately seize on a new hypothesis that explains the evidence. But these are all mentally lazy strategies. If you thought about the situation for longer, you could easily generate a half dozen additional hypotheses. Perhaps she has trained herself to flip coins a certain way. Perhaps she simply lied and said the coin landed heads each time, and didn't really give you time to see it well when it didn't. Perhaps she is substituting the coin as it lands. Perhaps, perhaps, perhaps.

My advice, per the title, is to slow down. You might decide to be a good Bayesian, and preserve multiple hypotheses, updating marginally - but doing this means that you assume the correct hypothesis is in your prior set. There are a million hypotheses that ca
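The "7:1 odds" can be checked directly: under the chance hypothesis, three correct calls in a row have probability 1/8 (odds of 7:1 against), so the observation carries a Bayes factor of 8 toward the psychic hypothesis, assuming a true psychic would always call correctly. A minimal sketch of the update, with an illustrative skeptical prior:

```python
# Bayes update for the psychic-coin example. The prior is an illustrative
# choice; the factor of 8 assumes a true psychic predicts every flip and
# chance predicts each flip with probability 1/2.
def posterior(prior: float, bayes_factor: float) -> float:
    """Convert a prior probability to a posterior via the odds form of Bayes."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

prior = 1e-6          # illustrative skeptical prior that she is psychic
bayes_factor = 8      # P(3 hits | psychic) / P(3 hits | chance) = 1 / (1/8)
print(posterior(prior, bayes_factor))   # ~8e-6: the prior barely moves
```

Starting from a one-in-a-million prior, the posterior rises only to roughly eight in a million, which matches the second bullet's verdict that this much evidence should not move a strong prior very far.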
b0bbf6e6-ef7c-42cf-84c9-5f2c49262944
trentmkelly/LessWrong-43k
LessWrong
Another Critique of Effective Altruism

Cross-posted from my blog. It is almost certainly a bad idea to let this post be your first exposure to the effective altruist movement. You should at the very least read these two posts first.

Recently Ben Kuhn wrote a critique of effective altruism. I'm glad to see such self-examination taking place, but I'm also concerned that the essay did not attack some of the most serious issues I see in the effective altruist movement, so I've decided to write my own critique. Due to time constraints, this critique is short and incomplete. I've tried to bring up arguments that would make people feel uncomfortable and defensive; hopefully I've succeeded.

Briefly, here are some of the major issues I have with the effective altruism movement as it currently stands:

* Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.
* Over-confident claims coupled with insufficient background research.
* Over-reliance on a small set of tools for assessing opportunities, which leads many to underestimate the value of things such as “flow-through” effects.

The common theme here is a subtle underlying message that simple, shallow analyses can allow one to make high-impact career and giving choices, and divest one of the need to dig further. I doubt that anyone explicitly believes this, but I do believe that this theme comes out implicitly both in arguments people make and in actions people take.

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above; for instance, the GiveWell blog does a very good job of warning against the first and third points above, and I would recommend that anyone who isn't already subscribed subscribe to it (and there are other examples that I'm failing to mention). But for the purposes of this essay, I will ignore this fact except for the cu
924e8307-234d-474e-b25f-ae00bc3c56eb
trentmkelly/LessWrong-43k
LessWrong
Take the Rationality Test to determine your rational thinking style
91365963-3532-4004-bf05-68b652f6f17f
trentmkelly/LessWrong-43k
LessWrong
My summary of Eliezer's position on free will

I'm participating in a university course on free will. On the online forum, someone asked me to summarise Eliezer's solution to the free will problem, and I did it like this. Is it accurate in this form? How should I change it?

“I'll try to summarise Yudkowsky's argument. As Anneke pointed out, it's kinda difficult to decide what the concept of free will means. How would particles or humans behave differently if they had free will compared to if they didn't? It doesn't seem like our argument is about what we actually expect to see happening. This is similar to arguing about whether a tree falling in a deserted forest makes any noise. If two people are arguing about this, they probably agree that if we put a microphone in the forest, it would pick up vibrations. And they also agree that no-one is having the sense experience of hearing the tree fall. So they're arguing over what 'sound' means. Yudkowsky proposes a psychological reason why people may have that particular confusion, based on how human brains work.

So with respect to free will, we can instead ask the question, “Why would humans feel like they have free will?” If we can answer this well enough, then hopefully we can dissolve the original question. It feels like I choose between some of my possible futures. I can imagine waking up tomorrow and going to my Engineering lecture, or staying in my room and using Facebook. Both of those imaginings feel equally 'possible'. Humans execute a decision making algorithm which is fairly similar to the following one.

1. List all your possible actions. For my lecture example, that was “Go to lecture” and “Stay home.”
2. Predict the state of the universe after pretending that you will take each possible action. We end up with “Buck has learnt stuff but not Facebooked” and “Buck has not learnt stuff but has Facebooked.”
3. Decide which is your favourite outcome. In this case, I'd rather have learnt stuff. So that's option 1.
4. Execute the action associate
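The four-step procedure sketched in the post maps directly onto a few lines of code. In this minimal sketch, `predict` (imagining the world after an action) and `score` (ranking imagined outcomes) are stand-in functions, not claims about how the brain implements either step:

```python
# Sketch of the four-step decision algorithm described in the post.
# `predict` and `score` are stand-ins for whatever the brain actually does.
def decide(actions, predict, score):
    # Steps 1-2: list the possible actions and imagine the resulting worlds.
    outcomes = {action: predict(action) for action in actions}
    # Step 3: decide which imagined outcome is the favourite.
    best = max(outcomes, key=lambda action: score(outcomes[action]))
    # Step 4: execute (here, simply return) the associated action.
    return best

actions = ["go to lecture", "stay home"]
predict = {"go to lecture": "learnt stuff", "stay home": "facebooked"}.get
score = {"learnt stuff": 1.0, "facebooked": 0.3}.get
print(decide(actions, predict, score))   # -> "go to lecture"
```

Writing it out makes the dissolution vivid: a fully deterministic procedure still has to enumerate and weigh multiple options, which is plausibly what the feeling that "both futures are possible" corresponds to.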
b0865d66-2b4a-42e7-a56b-4bdb67fe492b
trentmkelly/LessWrong-43k
LessWrong
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

I am one of the Fund Managers for the Long-Term Future fund and we are doing a very short round of applications to help donors get a sense of what projects we are likely to fund before giving season:

> Have an x-risk or far-future focused project you’ve been thinking about, but don’t have funding for? The CEA Long-Term Future Fund is accepting applications until this Saturday (11/24) at this form:
> https://docs.google.com/forms/d/e/1FAIpQLSf46ZTOIlv6puMxkEGm6G1FADe5w5fCO3ro-RK6xFJWt7SfaQ/viewform
> This round of applications has a much shorter deadline than usual, to give the Fund’s new management team a chance to make some grants to help donors get a sense of what kinds of projects we are likely to fund before the end of the giving season in December. Future rounds will have longer timelines, and the Fund may invite applicants from this round to re-submit in future rounds.
> In other words: if you can’t hit this deadline, no need to worry - we know it’s very tight! Just keep an eye out for future rounds.

Feel free to ask any questions in the comments. I will be happy to answer any that I know the answer to (and will ping other fund managers if they seem better suited to answer a question). Some more context from the form:

> For this round and this form, we are particularly interested in small teams and individuals that are trying to get projects off the ground, or that need less money than existing grant-making institutions are likely to give out (i.e. less than ~$40k). Here are a few examples of project types that we're open to funding an individual or group for (note that this list is not exhaustive):
> + To spend a few months (perhaps during the summer) to research an open problem in AI alignment or AI strategy and produce a few blog posts or videos on their ideas
> + To spend a few months building a web app with the potential to solve an operations bottleneck at x-risk organisations
> + To spend a few months up-skilling in a field to prepare for futu
9c6e3b60-5ac0-47fc-b8dd-ab28be838975
trentmkelly/LessWrong-43k
LessWrong
Meetup : Edinburgh LW Meetup

Discussion article for the meetup : Edinburgh LW Meetup

WHEN: 20 October 2011 02:00:00PM (+0100)
WHERE: Delhi Cafe, 67 Nicolson Street, Edinburgh, Scotland, United Kingdom.
Topic: Value Complexity & Instrumental Rationality

Discussion article for the meetup : Edinburgh LW Meetup
81b27d9a-f8fa-4b10-8a31-63c7f48614d3
trentmkelly/LessWrong-43k
LessWrong
Why does advanced AI want not to be shut down?

I've always been pretty confused about this. The standard AI risk scenarios usually (though I think not always) suppose that advanced AI wants not to be shut down. As commonly framed, the AI will fool humanity into believing it is aligned so as not to be turned off, until - all at once - it destroys humanity and gains control over all earth's resources.

But why does the AI want not to be shut down?

The motivation behind a human wanting not to die comes from evolution. You die before reproduction age, and you won't be able to pass on your not-afraid-of-death-before-reproduction-age genes. You die after reproduction age, and you won't be able to take care of your children to make sure they pass on your genes. Dying after the age when your children are grown only started to happen after humans had evolved into their current state, I believe, and so the human emotional reaction defaults to the one learned from evolution. How this fits into the "human utility function" is a controversial philosophical/psychological question, but I think it's fair to say that the human fear of dying surpasses the desire not to miss out on the pleasure of the rest of your life. We're not simply optimizing for utility when we avoid death.

AI is not subject to these evolutionary pressures. The desire not to be shut down must come from an attempt to maximize its utility function. But with the current SOTA techniques, this doesn't really make sense. Like, how does the AI compute the utility of being off? A neural network is trained to optimize a loss function on input. If the AI doesn't get input, is that loss... zero? That doesn't sound right. Just by adding a constant amount to the loss function we should be able to change the system from one that really wants to be active to one that really wants to be shut down, yet the gradients used in backpropagation stay exactly the same. My understanding is that reinforcement learning works the same way; GPT243, so long as it is trained with the
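The claim in the middle of the post — that adding a constant to the loss leaves the backpropagated gradients untouched — is easy to verify, since the derivative of a constant is zero. A minimal check, assuming PyTorch is available:

```python
# Verifies the post's claim: gradients of L and L + c are identical,
# because the derivative of a constant is zero. Assumes PyTorch is installed.
import torch

w = torch.randn(3, requires_grad=True)
x = torch.randn(3)

loss = (w @ x - 1.0) ** 2                   # an arbitrary scalar loss
(grad,) = torch.autograd.grad(loss, w)

shifted = (w @ x - 1.0) ** 2 + 100.0        # the same loss plus a constant
(grad_shifted,) = torch.autograd.grad(shifted, w)

print(torch.allclose(grad, grad_shifted))   # True: the optimizer sees no difference
```

This is why shifting the loss by a constant cannot, by itself, turn a system that "wants to be active" into one that "wants to be shut down": gradient-based training only ever sees the loss's slope, not its absolute level.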