Dataset columns:
- id: string, length 36
- source: string, 15 classes
- formatted_source: string, 13 classes
- text: string, length 2 to 7.55M characters
5c4106ca-37c9-4ade-9916-3fbc14dc02f1
trentmkelly/LessWrong-43k
LessWrong
Breaking quarantine is negligence. Why are democracies acting like we can only ask nicely? One of the earliest angles on the coronavirus pandemic was that this was a story of the draconian extremes that an authoritarian government can impose on its people. This angle on the events in Wuhan flourished especially while the media and most governments were not taking the pandemic seriously. So we’ve had a lot of hand-wringing: what can western governments do about this, because obviously we can’t do that? At the worst extremes we have people throwing up their hands and saying, “well we’re screwed, white people will never do what they’re told, unlike those obedient Asians — and it’s not like we can make them, right?”. That attitude is completely crazy. It’s not true, and it’s part of the general atmosphere of extremely mixed messages that have made the crisis so bad. For instance, consider the Imperial College report to the British government, which modelled out the likely effects of different policies. Their model assumes that only 70% of people instructed to self-isolate after a coronavirus diagnosis will comply, for a total reduction in contacts of 75%. For stricter measures that quarantine entire households on one positive diagnosis, they assume a compliance rate of only 50%. We’re shutting down entire economies, and yet we view ourselves as helpless to enforce quarantine on confirmed or presumed positive cases? People who are tested or presumed positive will obey quarantine if they understand that they might go to prison if they don’t. This is in no way an extreme or authoritarian response. It’s completely consistent with civil liberties. Any individual freedom is always constrained by reasonable expectations of harm. None of us have a general-purpose freedom to act however we want regardless of the risks to other people. The specifics of the coronavirus pandemic are unusual, but the general principle isn’t. Take the most extreme case: someone has tested positive and been instructed to self-isolate, but the person ignores the instruction and infects s
201483cf-4501-437c-ac18-744f1675da65
trentmkelly/LessWrong-43k
LessWrong
Suffering-Focused Ethics in the Infinite Universe. How can we redeem ourselves if Multiverse Immortality is real and subjective death is impossible? This work is only a skeleton of a more complete concept and, in places, a loose collection of unprofessional thoughts. It is a translation from the original language, which is probably why it may contain mistakes. I apologize for the underdeveloped elements of the concept; for many reasons, in many places the current state is far from perfect, and I hope it will be improved in the future. Nevertheless, I felt it would be a good experiment to publish it here temporarily.

1. Suffering-Focused Ethics: Negative Consequentialism, Efilism, Promortalism, and Antinatalism as worldviews whose goal is to reduce suffering.

The life of every sentient being contains inherent elements of suffering: in the broadest sense, all the negative aspects of life that can be subjectively experienced by anything with that subjectivity. As a rule, it is assumed that the more complicated the nervous system in which negative states are felt, the greater the being's capacity to feel them; the complexity of the "mind" grows along with the advancement of the brain itself, which may, and very likely does, involve an increase in the range of negative (and relatively "positive") states that can be experienced. Perhaps as the mental capacity of a given creature increases, so does the intensity of the suffering it can feel. We currently don't know as much as we would like to about suffering. Here are some links to possibly helpful articles:

https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
https://en.wikipedia.org/wiki/Pain_in_animals
https://en.wikipedia.org/wiki/Pain_in_invertebrates

Man is probably the most cognitively advanced species on Earth. From the time of the emergence of our distant ancestors, at least a dozen species very similar to us, within the same genus Homo and, more broadly, the Hominini tribe, lived and died out over millions
1f02e7b5-f234-4606-bb02-2993208ade76
trentmkelly/LessWrong-43k
LessWrong
Who's welcome to our LessWrong meetups? As part of announcing meetups publicly, it's good to write in the meetup description what kind of people would likely be a good match for the meetup. I still haven't gotten a good description myself. How would you describe the kind of people we are in words that are clear to outsiders?
709c93dd-5f23-4373-9e01-6f4b0ca5413d
trentmkelly/LessWrong-43k
LessWrong
Are current LLMs safe for psychotherapy? Hi, I am considering using an LLM as a psychotherapist for my mental health. I already have a human psychotherapist, but I see him only once a week and my issues are very complex. An LLM such as Gemini 2 is always available and processes large amounts of information more quickly than a human therapist. I don't want to replace my human psychotherapist, just talk to the LLM between sessions. However, I am concerned about deception and hallucinations. As the conversation grows and the LLM acquires more and more information about me, could it intentionally give me harmful advice? This worries me because one of the things I would tell it about is my fear of the dangers of AI. I am also concerned about hallucinations. How common are hallucinations when an LLM generates mental health information? Do hallucinations become more likely with increasing context size?

----------------------------------------

Further questions:

* Could the LLM accidentally reinforce negative thought patterns or make unhelpful suggestions?
* What if the LLM gives advice that contradicts what my therapist says? How would I know what to do?
* What is the risk of becoming too dependent on the LLM, and how can I check for that?
* Are there specific prompts or ways of talking to the LLM that would make it safer or more helpful for this kind of use?

Are there any further important things I need to be aware of when using an LLM for mental health advice? I'm not a technical expert, so please keep explanations simple. Thank you very much.
70e716f5-b430-46df-a7f2-1414608090b2
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"In the post 'Four layers of Intellectual Conversation', Eliezer says that both the writer of an idea, and the person writing a critique of that idea, need to expect to have to publicly defend what they say at least one time. Otherwise they can write something stupid and never lose status because they don't have to respond to the criticism.I was wondering about where this sort of dialogue happens in academia. I have been told by many people that current journals are quite terrible, but I've also heard a romantic notion that science (especially physics and math) used to be more effectively pursued in the early 20th century (Einstein, Turing, Shannon, etc). So Oliver and I thought we'd look at the journals to see if they had real conversations.We looked at two data points, and didn't find any.First, Oliver looked through Einstein's publication history (Oli is German and could read it). Einstein has lots of 'reviews' of others' work in his list of publications, sometimes multiple of the same person, which seemed like a promising example of conversation. Alas, it turned out that Einstein had merely helped German journals write summaries of papers that had been written in English, and there was no real dialogue.Second, I looked through a volume of the London Mathematical Society, in particular, the volume where Turing published his groundbreaking paper proving that not all mathematical propositions are decidable (thanks to sci-hub for making it possible for me to read the papers!). My eyes looked at about 60% of the pages in the journal (about 12 papers), and not one of them disagreed with any prior work. There was :A footnote that thanked an advisor for finding a flaw in a proofAn addendum page (to the whole volume) that consisted of a single sentence thanking someone for showing one of their theorems was a special case of someone else's theoremOne person who was skeptical of another person's theorem. But that theorem by Ramanujan (who was famous for stating theorems without proofs), and the whole paper primarily found proofs of his other theorems.There were lots of discussions of people's work but always building, or extending, or finding a neater way of achieving the same results. Never disagreement, correction, or the finding of errors.One thing that really confuses me about this is that it's really hard to get all the details right. Lots of great works are filled with tiny flaws (e.g. Donald Knuth reliably has people find errors in his texts). So I'd expect any discussion of old papers to bring up flaws, or that journals would require a section at the end for corrections of the previous volume. There were of course reviewers, but they can't be experts in all the areas.But more importantly where did/does the dialogue happen if not in the journals?If I try to be concrete about what I'm curious about:As people go about the craft of doing science, they will make errors (conceptual mistakes, false proofs, and so on). One of the main pieces of infrastructure in academia are journals, where work gets published and can become common knowledge. Two places to fix errors are pre-publication and post-publication. I don't know much about the pre-publication process, but if it is strong enough to ensure no errors got published, I'd like some insight into what that process was like. Alternatively, if course-correction happened post-publication, I'm interested to know how and where, because when I looked (see above) I couldn't find it.There's also the third alternative, that no progress was made. 
And there's the fourth alternative, that most papers were bad and the thing scientists did was to just never read or build on them. I'm happy to get evidence for any of these, or a fifth alternative.Added: Maybe this is a more crisp statement:Why do (old) journals not claim to have errors in any of the papers? Is it because they're (implicitly) lying about the quality of the papers? Or if there's a reliable process that removed errors from 100% of papers, can someone tell me what that process was?" The above is a chunk of text from a post created by a user on the website LessWrong. What is this text talking about? This text is discussing the nature of academic discourse and error correction in scientific literature, particularly within the context of academic journals. The author is expressing curiosity about how and where dialogue and correction of errors occur in academia, especially given that they could not find examples of such in their review of certain historical publications. They note that the publications they examined contained no disagreements or corrections of prior work, despite the fact that errors are common in scientific research. This leads them to question where this critical dialogue and error correction takes place if not in the journals. They are also interested in the process of pre-publication and post-publication error correction, and how effective these processes are in ensuring the accuracy of published work. The author further questions whether the lack of errors in the journals they examined is due to a reliable process that removes errors, or if the journals are implicitly overstating the quality of the papers. They are open to evidence supporting any of these possibilities or other alternatives. Can you elaborate on the mechanisms that might exist within the academic community for informal error correction and dialogue outside of the published journals? Certainly, there are several mechanisms through which informal error correction and dialogue can occur within the academic community outside of published journals. 1. Conferences and Seminars: These are common platforms where researchers present their work and receive immediate feedback from peers. This can lead to the identification and correction of errors before the work is formally published. 2. Preprint Servers: Platforms like arXiv and bioRxiv allow researchers to share their work before it undergoes formal peer review. This provides an opportunity for the wider scientific community to review and critique the work, potentially identifying errors or oversights. 3. Collaborative Platforms: Online platforms like ResearchGate and Academia.edu allow researchers to share their work, ask questions, and engage in dialogue about research topics. These platforms can facilitate informal peer review and error correction. 4. Peer Review: While it is a formal process, it is worth mentioning because it is a crucial mechanism for error correction. Researchers scrutinize a paper before it's published, often leading to significant revisions. 5. Social Media: Platforms like Twitter are increasingly being used by academics to discuss and critique published research. While this is a very informal method, it can lead to the identification of errors that were missed during the peer review process. 6. Personal Communication: Direct communication between researchers, such as emails or face-to-face discussions, can also lead to error identification and correction. 7. 
Post-publication Review: Some journals offer the option of post-publication peer review, allowing for ongoing critique and potential correction of published work. 8. Replication Studies: Other researchers in the field may attempt to replicate the study to confirm its findings. If they are unable to do so, this may indicate an error in the original study. These mechanisms, while less formal than published corrections or retractions in academic journals, contribute to the self-correcting nature of science. Discuss the potential limitations and challenges of using social media platforms like Twitter for academic dialogue and error correction in scientific research. While social media platforms like Twitter can provide a valuable space for academic dialogue and error correction in scientific research, there are several potential limitations and challenges associated with their use: 1. Lack of Peer Review: Unlike formal academic journals, social media platforms do not have a structured peer review process. This means that information shared may not have been rigorously scrutinized by experts in the field. Consequently, there is a risk of misinformation or misinterpretation. 2. Accessibility and Inclusivity: Not all researchers use or have access to social media. This can lead to a lack of diverse perspectives and may exclude some researchers from the conversation. 3. Speed vs Accuracy: The fast-paced nature of social media can encourage quick responses rather than thoughtful, considered dialogue. This can lead to hasty conclusions or oversights. 4. Echo Chambers: Social media platforms often create echo chambers, where users are exposed primarily to opinions that align with their own. This can limit the diversity of feedback and hinder constructive criticism. 5. Limited Depth of Discussion: The character limits on platforms like Twitter can restrict the depth of discussion and make it difficult to fully explore complex scientific concepts or arguments. 6. Informality: The informal nature of social media communication may not lend itself to the nuanced and careful language often required in scientific discussion. 7. Public Nature: The public nature of social media platforms can lead to issues around confidentiality and the premature sharing of research findings. It can also expose researchers to unwarranted criticism or harassment. 8. Verification of Identity: It can be difficult to verify the identity and credentials of individuals on social media, which could potentially allow non-experts to weigh in on academic discussions. Despite these challenges, when used responsibly and critically, social media can complement more traditional forms of academic dialogue and contribute to the broader discussion and development of scientific research.
b0bdbb4c-25dc-4bac-b7b0-00ab2e263a1f
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Collection of GPT-3 results I kept seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd start collecting them.

* First, gwern's [crazy collection](https://www.gwern.net/GPT-3) of all kinds of prompts, with GPT-3 generating poetry, summarizing stories, rewriting things in different styles, and much much more. ([previous discussion](https://www.lesswrong.com/posts/3YjWifjkhw2Q8RQL2/gpt-3-fiction-samples))
* [Automatic code generation from natural language descriptions](https://twitter.com/sharifshameem/status/1282676454690451457). "Give me a page with a table showing the GDP of different nations, and a red button."
* [Building a functioning React app](https://twitter.com/sharifshameem/status/1284095222939451393) by just describing it to GPT-3.
* Taking a brief technical tweet about GPT-3 and [expanding it to an essay](https://twitter.com/Plinz/status/1283211048145711104) which the author of the original tweet mostly endorses.
* Acting as a more intense therapist than ELIZA ever was. [[1](https://twitter.com/nicklovescode/status/1283300424418619393), [2](https://twitter.com/nicklovescode/status/1283326066338062337)]
* On the one hand, you can [trick GPT-3 into saying nonsense](http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html). On the other hand, you can just [prompt it to point out the nonsense](https://twitter.com/nicklovescode/status/1284050958977130497).
* Redditor shares an "AI Dungeon" game played with the new GPT-3-based "Dragon Model", [involving a cohesive story](https://www.reddit.com/r/AIDungeon/comments/hpkqij/using_the_dragon_module_i_just_generated_what_is/) generated in response to their actions, with only a little manual editing.
  + The [official Dragon Model](https://medium.com/@aidungeon/ai-dungeon-dragon-model-upgrade-7e8ea579abfe) announcement.
  + I was a little skeptical about some of these GPT-3 results until I tried the Dragon Model myself, and had it [generate cohesive space opera](https://kajsotala.fi/2020/07/gpt-3-space-opera/) with almost no editing.
* [Another example](https://twitter.com/hturan/status/1282261783147958272) of automatically generated code, this time giving GPT-3 a bit of React code defining a component called "ThreeButtonComponent" or "HeaderComponent", and letting it write the rest.
* From a brief description of a medical issue, GPT-3 [correctly generates](https://twitter.com/QasimMunye/status/1278750809094750211) an explanation indicating that it's a case of asthma, mentions a drug that's used to treat asthma, the type of receptor the drug works on, and which multiple-choice quiz question this indicates.
* GPT-3 tries to get a software job, and comes [close to passing a phone screen](https://twitter.com/lacker/status/1279136788326432771).
* Translating natural language descriptions [into shell commands](https://twitter.com/harlandduman/status/1282132804034150400), and [vice versa](https://twitter.com/harlandduman/status/1282198829136097281).
* Given a prompt with a few lines of dialogue, [GPT-3 continues the story](https://andrewmayneblog.wordpress.com/2020/06/11/the-aichannels-project/), incorporating details such as having a character make 1800s references after it was briefly mentioned that she's a nineteenth-century noblewoman.
* Turning [natural language into lawyerese](https://twitter.com/f_j_j_/status/1283349995144359937).
* Using GPT-3 to [help you with gratitude journaling](https://twitter.com/nicklovescode/status/1283740861260369920).
* [Source](https://yuki.la/vg/299570235) is an anonymous image board poster so could be fake, but: if you give an AI Dungeon character fake wolf ears and then ask her to explain formal logic to you, [she may use the ears in her example](https://pbs.twimg.com/media/EdMh-FaXkAA1nLt?format=png&name=large). + Even after seeing all the other results, I honestly have difficulties believing that [this one](https://twitter.com/kleptid/status/1284098635689611264) is real. * Of course, [even GPT-3 fumbles sometimes](https://aiweirdness.com/post/621186154843324416/all-your-questions-answered).
eb2cdedb-6c01-4837-8d0e-ca177f4e35d8
StampyAI/alignment-research-dataset/special_docs
Other
Selecting the Partial State Abstractions of MDPs: A Metareasoning Approach with Deep Reinforcement Learning

Samer B. Nashed¹*, Justin Svegliato²*, Abhinav Bhatia¹, Stuart Russell², Shlomo Zilberstein¹

*Both authors contributed equally. This work was supported by NSF grants IIS-1813490 and IIS-1954782.
¹University of Massachusetts, Amherst, MA, USA. Emails: {snashed, abhinavbhati, shlomo}@cs.umass.edu
²University of California, Berkeley, CA, USA. Emails: {jsvegliato, russell}@berkeley.edu

Abstract — Markov decision processes (MDPs) are a common general-purpose model used in robotics for representing sequential decision-making problems. Given the complexity of robotics applications, a popular approach for approximately solving MDPs relies on state aggregation to reduce the size of the state space but at the expense of policy fidelity—offering a trade-off between policy quality and computation time. Naturally, this poses a challenging metareasoning problem: how can an autonomous system dynamically select different state abstractions that optimize this trade-off as it operates online? In this paper, we formalize this metareasoning problem with a notion of time-dependent utility and solve it using deep reinforcement learning. To do this, we develop several general, cheap heuristics that summarize the reward structure and transition topology of the MDP at hand to serve as effective features. Empirically, we demonstrate that our metareasoning approach outperforms several baseline approaches and a strong heuristic approach on a standard benchmark domain.

I. INTRODUCTION

MDPs are a common general-purpose model used in robotics for representing sequential decision-making problems [23]. However, the complexity of solving MDPs scales poorly with the number of features reasoned about in the environment, limiting their applicability. To address this limitation, a range of approximate solvers for MDPs have been proposed that seek to trade a small reduction in policy quality for a large reduction in computation time.

A particularly effective approximate solver for MDPs, recently proposed by Nashed et al. [17], solves a sequence of partially abstract MDPs in order to solve an MDP. In a partially abstract MDP, some states are considered at maximum fidelity while other states are considered at lower fidelity using an abstract representation. This can greatly reduce the size of the state space while still resulting in a near optimal policy by using a detailed representation for states where it is most necessary. Still, for a partially abstract MDP to be effective, it requires a suitable abstraction function that maps a state in an MDP to an abstract state in an abstract MDP. Since there has been substantial work on generating abstraction functions for planners, ranging from symbolic planners [6], [27], [13] to stochastic planners [1], [7], [26], this paper assumes that a suitable abstraction function already exists via either learning or careful expert design.

Fig. 1. Two partially abstract MDPs that were constructed using different expansion strategies: a cheap expansion strategy that often results in lower policy quality (top) and an expensive expansion strategy that often results in higher policy quality (bottom). Small circles are ground states, large circles are abstract states, and arrows are transitions between states. The red ground state is the current state, the green ground states have high reward, and the green abstract states contain a ground state that has high reward.
Given a specific abstraction function, a partially abstract MDP uses an expansion strategy to determine which states of the MDP to reason about at maximum fidelity and which states of the MDP to reason about at lower fidelity. Concretely, this means that the expansion strategy selects the abstract states to expand in the partially abstract MDP. Ideally, any approach to selecting an expansion strategy (illustrated in Fig. 1) should optimize a formal notion of time-dependent utility by managing the trade-off between policy quality and computation time given the domain of operation and the available computational resources. Most importantly, such an approach should generalize to any MDP and require little knowledge of the details of that MDP.

In this paper, we therefore (1) offer a metareasoning approach to selecting different expansion strategies that optimizes a formal notion of time-dependent utility and (2) express it as a deep reinforcement learning problem. Moreover, we (3) propose several general, cheap heuristics that summarize the reward structure and transition topology of the MDP at hand to generate effective features for deep reinforcement learning. Empirically, we demonstrate that our metareasoning approach outperforms several baseline approaches and a strong heuristic approach on a standard benchmark domain.

II. RELATED WORK

There are many approaches to approximately solving MDPs, including performing dynamic programming, using partial policies, and employing state abstractions. See [17] for a thorough discussion of how these approaches relate to partially abstract MDPs. Similarly, reinforcement learning with time-dependent utility, introduced by Horvitz [10], has been used in a variety of metareasoning problems [24], [22], [4]. Although our approach uses reinforcement learning and time-dependent utility similar to this work, we focus on a different problem—that of learning how to select different expansion strategies dynamically online during operation.

Online planning and learning over different abstractions in general is a common problem across many areas of artificial intelligence and encompasses several related sub-problems. These include handling the non-Markovian nature of state and action abstractions [2], learning context-specific independences present in certain tasks [5], and learning temporal abstractions in the form of progressively more abstract skill controllers [14]. However, here, we restrict our attention to state abstractions in the form of state aggregation where multiple states in a larger (ground) problem form a single state in a smaller (abstract) problem.

Online selection of state abstractions has been studied in the context of both reinforcement learning and planning. In reinforcement learning, abstractions are generally used if the state space is large and training data is sparse, which leads to poor experiential coverage. Methods include learning the best state abstraction from a set of state abstractions via hypothesis testing [12] and dynamically selecting state abstractions of increasing granularities based on confidence intervals of Q-values [25]. In planning, similar techniques have been applied to sample-based tree search algorithms.
For example, the PARSS algorithm adjusts state abstractions during tree search by starting with coarse state abstractions and refining them given the variance of the Q-values over actions at a specific abstract state [11].

An extensive body of research has investigated reasoning over state abstractions during planning. Early work proposed a hierarchy of state abstractions, represented as factored semi-MDPs, that may have multiple intermediate state abstractions that can be swapped in and out depending on the environment [21]. Later work proposed algorithms for dynamically eliminating state factors in states that were estimated not to impact the policy by comparing two partially abstract policies made with different state abstractions [3]. Finally, there have been specific applications, such as multi-agent planning, where specialized partition schemes have been introduced and adapted to an online setting [16].

This paper proposes a metareasoning framework that takes advantage of powerful deep reinforcement learning methods to learn a policy for selecting different expansion strategies. Following work on SSPs, we use general, cheap heuristic features that avoid relying on the specifics of an MDP. Most importantly, we define this as a metareasoning problem that optimizes a formal notion of time-dependent utility.

III. BACKGROUND

In this section, we review the formal definitions of a ground MDP, an abstract MDP, and a partially abstract MDP.

a) Ground MDPs: A ground MDP is a tuple $M = \langle S, A, T, R \rangle$. The space of states is $S$. The space of actions is $A$. The transition function $T : S \times A \times S \to [0, 1]$ represents the probability of reaching a state $s' \in S$ after performing an action $a \in A$ in a state $s \in S$. The reward function $R : S \times A \to \mathbb{R}$ represents the immediate reward of performing an action $a \in A$ in a state $s \in S$. A solution is a policy $\pi : S \to A$ indicating that an action $\pi(s) \in A$ should be performed in a state $s \in S$. A policy $\pi$ induces a value function $V^\pi : S \to \mathbb{R}$ representing the expected discounted cumulative reward $V^\pi(s) \in \mathbb{R}$ for each state $s \in S$ given a discount factor $0 \le \gamma < 1$. An optimal policy $\pi^*$ maximizes the expected discounted cumulative reward for each state $s \in S$ given the equation
$$V^*(s) = \max_{a \in A} \Big[ R(s, a) + \gamma \sum_{s' \in S} T(s, a, s') V^*(s') \Big].$$

b) Abstract MDPs: Specifying an abstract MDP $\bar{M}$ of a ground MDP $M$ requires two functions [15]. First, an abstraction function $\phi : S \to \bar{S}$ maps a ground state $s \in S$ to an abstract state $\bar{s} \in \bar{S}$. Second, an inverse abstraction function $\phi^{-1} : \bar{S} \to \mathcal{P}(S)$ maps an abstract state $\bar{s} \in \bar{S}$ to a set of ground states $S \subseteq \mathcal{P}(S)$, where $\mathcal{P}(S)$ is the power set of $S$. The condition $\phi(s) = \bar{s} \Leftrightarrow s \in \phi^{-1}(\bar{s})$ must hold for each ground state $s \in S$ and abstract state $\bar{s} \in \bar{S}$.

An abstract MDP is a tuple $\bar{M} = \langle \bar{S}, A, \bar{T}, \bar{R} \rangle$ [15]. The space of abstract states is $\bar{S} = \{\phi(s) \mid s \in S\}$ such that a set of ground states $S$ is abstracted by an abstraction function $\phi$. The space of ground actions is $A$. The abstract transition function is
$$\bar{T}(\bar{s}, a, \bar{s}') = \sum_{s \in \phi^{-1}(\bar{s})} \psi(s) \sum_{s' \in \phi^{-1}(\bar{s}')} T(s, a, s').$$
The abstract reward function is
$$\bar{R}(\bar{s}, a) = \sum_{s \in \phi^{-1}(\bar{s})} \psi(s) R(s, a).$$
Note that a weighting function $\psi : S \to [0, 1]$ represents the probability of being in a ground state $s \in S$ in an abstract state $\phi(s) \in \bar{S}$.
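To make these definitions concrete, here is a minimal tabular sketch in Python (not from the paper; the array-based representation and all names are illustrative assumptions) that solves a ground MDP by value iteration and then aggregates it into an abstract MDP using $\phi$ and $\psi$:

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-6):
    """Compute V*(s) = max_a [R(s, a) + gamma * sum_s' T(s, a, s') V*(s')].
    T has shape (|S|, |A|, |S|); R has shape (|S|, |A|)."""
    V = np.zeros(T.shape[0])
    while True:
        Q = R + gamma * np.einsum("sat,t->sa", T, V)  # one Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def build_abstract_mdp(T, R, phi, psi, n_abstract):
    """Aggregate a ground MDP into an abstract MDP:
    T_bar(sb, a, sb') = sum_{s in phi^-1(sb)} psi(s) sum_{s' in phi^-1(sb')} T(s, a, s')
    R_bar(sb, a)      = sum_{s in phi^-1(sb)} psi(s) R(s, a)
    phi[s] gives the abstract index of ground state s; psi[s] is its weight."""
    n_states, n_actions, _ = T.shape
    T_bar = np.zeros((n_abstract, n_actions, n_abstract))
    R_bar = np.zeros((n_abstract, n_actions))
    for s in range(n_states):
        R_bar[phi[s]] += psi[s] * R[s]
        for s2 in range(n_states):
            T_bar[phi[s], :, phi[s2]] += psi[s] * T[s, :, s2]
    return T_bar, R_bar
```

If $\psi$ is a proper distribution over each abstract state's ground states, each row of T_bar sums to one, so the aggregate is itself a well-formed MDP.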
c) Partially Abstract MDPs: A partially abstract MDP $\tilde{M}$ combines a ground MDP $M$ and an abstract MDP $\bar{M}$ as a tuple $\tilde{M} = \langle \tilde{S}, A, \tilde{T}, \tilde{R} \rangle$ [17]. The space of partially abstract states is $\tilde{S} = \alpha \cup \beta$ with a set of ground states $\alpha = \{\phi^{-1}(\bar{s}) \mid \bar{s} \in \Gamma\}$ and a set of abstract states $\beta = \bar{S} \setminus \Gamma$ such that a set of expanded abstract states $\Gamma \subseteq \bar{S}$ is expanded by an inverse abstraction function $\phi^{-1}$. The space of ground actions is $A$. The partially abstract transition function $\tilde{T} : \tilde{S} \times A \times \tilde{S} \to [0, 1]$ is composed of the ground/abstract transition functions $T$ and $\bar{T}$:
$$\tilde{T}(\tilde{s}, a, \tilde{s}') = \begin{cases} T(\tilde{s}, a, \tilde{s}') & \text{if } \tilde{s} \in \alpha, \tilde{s}' \in \alpha \\ \sum_{s' \in \phi^{-1}(\tilde{s}')} T(\tilde{s}, a, s') & \text{if } \tilde{s} \in \alpha, \tilde{s}' \in \beta \\ \sum_{s \in \phi^{-1}(\tilde{s})} \psi(s) T(s, a, \tilde{s}') & \text{if } \tilde{s} \in \beta, \tilde{s}' \in \alpha \\ \bar{T}(\tilde{s}, a, \tilde{s}') & \text{if } \tilde{s} \in \beta, \tilde{s}' \in \beta \end{cases}$$
The partially abstract reward function $\tilde{R} : \tilde{S} \times A \to \mathbb{R}$ is composed of the ground/abstract reward functions $R$ and $\bar{R}$:
$$\tilde{R}(\tilde{s}, a) = \begin{cases} R(\tilde{s}, a) & \text{if } \tilde{s} \in \alpha \\ \bar{R}(\tilde{s}, a) & \text{if } \tilde{s} \in \beta \end{cases}$$
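The case analysis above transcribes directly into code. A hedged sketch (the representations are assumptions: T and T_bar are nested dictionaries keyed by state and action, phi_inv maps an abstract state to its set of ground states, and alpha is the set of expanded ground states):

```python
def partially_abstract_T(T, T_bar, phi_inv, psi, alpha, s, a, s2):
    """Partially abstract transition function, composed case by case
    from the ground function T and the abstract function T_bar."""
    s_ground, s2_ground = s in alpha, s2 in alpha
    if s_ground and s2_ground:      # ground -> ground
        return T[s][a].get(s2, 0.0)
    if s_ground and not s2_ground:  # ground -> abstract: sum over s2's ground states
        return sum(T[s][a].get(g2, 0.0) for g2 in phi_inv[s2])
    if not s_ground and s2_ground:  # abstract -> ground: weight by psi
        return sum(psi[g] * T[g][a].get(s2, 0.0) for g in phi_inv[s])
    return T_bar[s][a].get(s2, 0.0)  # abstract -> abstract

def partially_abstract_R(R, R_bar, alpha, s, a):
    """Partially abstract reward: ground reward on expanded states,
    abstract reward elsewhere."""
    return R[s][a] if s in alpha else R_bar[s][a]
```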
IV. SELECTING PARTIAL STATE ABSTRACTIONS

The problem of selecting an expansion strategy to determine the abstract states to expand in a partially abstract MDP involves managing the trade-off between policy quality and computation time. We frame this as a metareasoning problem, the main advantage being that it expresses this trade-off in terms of time-dependent utility, providing deep reinforcement learning with an appropriate objective. Methods for similar metareasoning problems typically use heuristics based on statistical measures to manage this trade-off. Here, we introduce the first approach that selects expansion strategies for partially abstract MDPs decision-theoretically.

A. Metareasoning for Partial State Abstractions

We begin by introducing the metareasoning problem for partial state abstractions. This problem requires a time-dependent utility that represents the utility of a policy in terms of its quality and computation time. Intuitively, a policy of a specific quality computed in a second has higher utility than a policy of the same quality computed in an hour. A time-dependent utility is therefore expressed as the difference between an intrinsic value that reflects the utility of a policy given its quality (but not computation time) and a time cost that reflects the utility of a policy given its computation time (but not quality) [10]. We define this function below.

Definition 1. Given a policy of quality $q \in \Phi$ and computation time $t \in \Psi$, a time-dependent utility $U : \Phi \times \Psi \to \mathbb{R}$ can be expressed as the difference between two functions $U(q, t) = U_I(q) - U_C(t)$, where $U_I : \Phi \to \mathbb{R}^+$ is the intrinsic value and $U_C : \Psi \to \mathbb{R}^+$ is the time cost.

Given this time-dependent utility, the one-step metareasoning problem for partial state abstractions is the problem of selecting the abstract states to expand in a given partially abstract MDP. Naturally, a solution to this problem must optimize time-dependent utility: we must select the abstract states to expand in the partially abstract MDP that balance the quality and computation time of its resulting policy. Formally, given a set of abstract states $\Gamma_i \in \mathcal{P}(\bar{S})$ to expand in a partially abstract MDP $\tilde{M}_i$ and its resulting policy $\pi_i$ of policy quality $q(\pi_i)$ and computation time $t(\pi_i)$, this one-step metareasoning problem is as follows:
$$\arg\max_{\Gamma_i \in \mathcal{P}(\bar{S})} U(q(\pi_i), t(\pi_i))$$
This can be challenging to solve given substantial uncertainty over the policy $\pi_i$ resulting from a partially abstract MDP $\tilde{M}_i$ that expands the abstract states $\Gamma_i \in \mathcal{P}(\bar{S})$.

In real-time settings, an autonomous system often lazily plans and acts online. Hence, during operation, we assume that the autonomous system is either (1) executing an old local policy $\pi$ when it encounters a visited current state $s$ or (2) solving for a new local policy $\pi'$ when it encounters an unvisited current state $s'$. We can therefore view the union of each local policy $\pi_i$ as a joint global policy $\pi_\Upsilon$, as in our recent work [17], that grows in quality and computation time with each local policy $\pi_i$.

Intuitively, this presents a sequential metareasoning problem for selecting the abstract states to expand in a sequence of partially abstract MDPs, where the resulting local policies $\pi_i$ of each partially abstract MDP $\tilde{M}_i$ together compose a joint global policy $\pi_\Upsilon$ that must optimize time-dependent utility. Formally, given the abstract states $\Upsilon = [\Gamma_1, \ldots, \Gamma_h]$ expanded in a sequence of partially abstract MDPs $[\tilde{M}_1, \ldots, \tilde{M}_h]$ over the unvisited states $\{s_1, \ldots, s_h\} \in S^h$ and the joint global policy $\pi_\Upsilon$ of quality $q(\pi_\Upsilon)$ and computation time $t(\pi_\Upsilon)$, this sequential metareasoning problem is as follows:
$$\arg\max_{\Upsilon} U(q(\pi_\Upsilon), t(\pi_\Upsilon))$$
In practice, it is often beneficial to approximate this sequential metareasoning problem as a sequence of independent one-step metareasoning problems as follows:
$$\arg\max_{\Gamma_1 \in \mathcal{P}(\bar{S})} U(q(\pi_1), t(\pi_1)) + \cdots + \arg\max_{\Gamma_h \in \mathcal{P}(\bar{S})} U(q(\pi_h), t(\pi_h))$$

B. Reinforcement Learning for Partial State Abstractions

We now cast the sequential metareasoning problem for partial state abstractions as an MDP. Each time an unvisited state $s_i \in S$ is encountered, the MDP must select the abstract states $\Gamma_i$ to expand in the partially abstract MDP $\tilde{M}_i$. Intuitively, the states include the quality and computation time of the current joint global policy along with the reward structure and transition topology of the ground MDP and abstract MDP, while the actions include expansion strategies that select the abstract states to expand in the partially abstract MDP. We define this metareasoning problem below.

Definition 2. The sequential metareasoning problem for partial state abstractions is a tuple $\langle \Phi, \Psi, F, \hat{S}, \hat{A}, \hat{T}, \hat{R} \rangle$ given a ground MDP $M$ and an abstract MDP $\bar{M}$:
• $\Phi = \{q_0, q_1, \ldots, q_{N_\Phi}\}$ is a set of qualities.
• $\Psi = \{t_0, t_1, \ldots, t_{N_\Psi}\}$ is a set of computation times.
• $F = F_0 \times F_1 \times \cdots \times F_{N_F}$ is a set of features that summarize the reward structure and transition topology of the ground MDP $M$ and abstract MDP $\bar{M}$.
• $\hat{S} = \Phi \times \Psi \times F$ is a set of states of computation: each state $s \in \hat{S}$ reflects the current joint global policy $\pi_\Upsilon$ of quality $q(\pi_\Upsilon) \in \Phi$ and computation time $t(\pi_\Upsilon) \in \Psi$.
• $\hat{A}$ is a set of actions of computation: the set of expansion strategies that each select different abstract states $\Gamma_i$ to expand in a partially abstract MDP $\tilde{M}_i$.
• $\hat{T} : \hat{S} \times \hat{A} \times \hat{S} \to [0, 1]$ is an unknown transition function that represents the probability of reaching state $s' = (q', t', f') \in \hat{S}$ after performing action $a \in \hat{A}$ in state $s = (q, t, f) \in \hat{S}$.
• $\hat{R} : \hat{S} \times \hat{A} \times \hat{S} \to \mathbb{R}$ is a reward function that represents the expected immediate reward, $\hat{R}(s, a, s') = U(q', t') - U(q, t)$, of reaching state $s' = (q', t', f') \in \hat{S}$ after performing action $a \in \hat{A}$ in state $s = (q, t, f) \in \hat{S}$.

Note that the reward function is consistent with the objective of optimizing the time-dependent utility: executing a sequence of expansion strategies until a joint global policy $\pi_\Upsilon$ of quality $q(\pi_\Upsilon) \in \Phi$ and computation time $t(\pi_\Upsilon) \in \Psi$ emits a cumulative reward equal to the time-dependent utility $U(q(\pi_\Upsilon), t(\pi_\Upsilon))$. This is a form of reward shaping—equivalent to emitting a reward of $U(q, t)$ once at the end of an episode in terms of the objective—that guides reinforcement learning with a reward at each time step [18].

We use deep reinforcement learning to learn an optimal metareasoning policy by performing a series of simulations that each use an expansion strategy to select the abstract states to expand in a sequence of partially abstract MDPs. Here, an agent learns a policy as a neural network by performing actions and observing rewards in the environment, making it a good fit for metareasoning for three reasons.
First, by balancing exploitation and exploration, it can learn how to select an expansion strategy given the reward structure and transition topology of the ground MDP and abstract MDP. Next, by ignoring large unreachable regions of the state space, it can reduce the overhead of learning which expansion strategy to select. Finally, by using a neural network that extracts the relationship between large input and output spaces, it can encode the effects of an expansion strategy on the resulting policy of a partially abstract MDP in a way that generalizes to novel states of computation.

C. Calculating Time-Dependent Utility

Typically, in metareasoning, a solution quality $q$ is defined as the approximation ratio $q = \frac{c^*}{c}$, where $c^*$ is the cost of the optimal solution and $c$ is the cost of the given solution. However, since computing the cost of an optimal solution to a complex problem is often infeasible, a solution quality can be estimated as the approximation ratio $q = \frac{\bar{c}^*}{c}$, where $\bar{c}^*$ is a lower bound on the cost of the optimal solution and $c$ is the cost of the given solution. Generally, a solution quality $q = 0$ means no solution was computed while a solution quality $q = 1$ means an optimal solution was computed.

We need a specific definition of solution quality in the context of MDPs. Here, the quality $q(\pi)$ of a policy $\pi$ is defined as the approximation ratio
$$q(\pi) = \frac{V^\pi}{V^*} = \frac{\sum_{s \in S} d(s) V^\pi(s)}{\sum_{s \in S} d(s) V^*(s)},$$
where $V^\pi$ is the value function of the policy $\pi$ and $V^*$ is the value function of the optimal policy $\pi^*$, given a probability $d(s)$ of starting in a state $s \in S$. However, since computing the value of an optimal policy of a complex MDP is often infeasible, the optimal value function $V^*$ must be replaced with an upper bound on the value function $\bar{V}^*$.

Given the quality $q(\pi_\Upsilon)$ and computation time $t(\pi_\Upsilon)$ of the current joint global policy $\pi_\Upsilon$, we can define the time-dependent utility $U(q(\pi_\Upsilon), t(\pi_\Upsilon))$ using an intrinsic value $U_I(q(\pi_\Upsilon))$ and a time cost $U_C(t(\pi_\Upsilon))$. First, given a tunable parameter $\alpha$, we model the intrinsic value as $U_I(q(\pi_\Upsilon)) = \alpha q(\pi_\Upsilon)$. Second, given a tunable parameter $\beta$, we model the time cost as $U_C(t(\pi_\Upsilon)) = \sum_{i \in h} [e^{\beta t(\pi_i)} - 1]$ such that $\pi_i$ is the local policy solved for the unvisited states $\{s_1, \ldots, s_h\} \in S^h$. The rates $\alpha$ and $\beta$ are typically given in the problem depending on the value and urgency of a policy [8].

Given this time-dependent utility, it is possible to express the reward function of the metareasoning problem. Formally, given the current state of computation $s = (q(\pi_\Upsilon), t(\pi_\Upsilon), \cdot) \in \hat{S}$ and the successor state of computation $s' = (q(\pi'_\Upsilon), t(\pi'_\Upsilon), \cdot) \in \hat{S}$ that reflect the current joint global policy $\pi_\Upsilon$ and successor joint global policy $\pi'_\Upsilon$, along with an expansion strategy $a \in \hat{A}$ used to solve for a new local policy $\pi$ that improves the joint global policy $\pi_\Upsilon$, we can express the reward function in the following way:
$$\hat{R}(s, a, s') = U(q(\pi'_\Upsilon), t(\pi'_\Upsilon)) - U(q(\pi_\Upsilon), t(\pi_\Upsilon)) = \alpha[q(\pi'_\Upsilon) - q(\pi_\Upsilon)] - e^{\beta t(\pi)}$$
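A small sketch of this utility model in Python (assuming policy qualities and per-policy solve times are measured externally; the function names are mine, not the paper's):

```python
import math

def intrinsic_value(q, alpha):
    """U_I(q) = alpha * q: the value of policy quality, ignoring time."""
    return alpha * q

def time_cost(local_times, beta):
    """U_C = sum_i [exp(beta * t_i) - 1] over local policy solve times."""
    return sum(math.exp(beta * t) - 1.0 for t in local_times)

def utility(q, local_times, alpha, beta):
    """Time-dependent utility U(q, t) = U_I(q) - U_C(t)."""
    return intrinsic_value(q, alpha) - time_cost(local_times, beta)

def meta_reward(q_old, q_new, t_local, alpha, beta):
    """Per-step metareasoning reward from the final expression above:
    R_hat = alpha * (q' - q) - exp(beta * t(pi))."""
    return alpha * (q_new - q_old) - math.exp(beta * t_local)
```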
V. REPRESENTING THE STATE OF COMPUTATION

In this section, we introduce 6 features that compose the state of computation in the sequential metareasoning problem for partial state abstractions. These features can easily be computed for a ground MDP $M$ and abstract MDP $\bar{M}$ and reflect their reward structure or transition topology.

A. Reward Structure

We define 3 features below describing the availability of immediate reward around the current ground/abstract state.

1) Reward Frequency: The feature $f_1$ is the number of positive reward ground states reachable within $h$ actions of the current ground state, normalized by the total number of reachable ground states.

2) Reward Proximity: The feature $f_2$ is the minimum number of actions required to reach the nearest positive reward ground state from the current ground state, normalized by the diameter $\mathrm{diam}(M)$ of the ground MDP $M$.

3) Reward Information: A main weakness of state abstractions is that they induce artificial information boundaries within the state space. For example, when a set of ground states is compressed into an abstract state, the abstract MDP loses information about any ground state that has successor ground states in other abstract states. This is because successor ground states may be aggregated with other ground states that are not reachable in a single action, which is detrimental when a ground state with high reward successor ground states can no longer be distinguished from a ground state without high reward successor ground states. Therefore, the feature $f_3$ is $1/(1 + |\mathrm{diam}(\bar{s}) - \delta|)$, where $\mathrm{diam}(\bar{s})$ is the diameter of the graph of the ground states in the current abstract state $\bar{s} \in \bar{S}$ and $\delta$ is the distance to the nearest high reward ground state, such that this value approaches 1 or 0 as these ground states move toward or away from this boundary.

B. Transition Topology

We define 3 features below describing the local transition topology surrounding the current ground/abstract state.

1) Transition Entropy: The feature $f_4$ is the entropy of the abstract successor state distribution at the current abstract state assuming that actions are selected randomly. This is a rough measure of the probability that actions performed at the current abstract state will transition to different abstract states that may be worth reasoning over more closely. A higher entropy at the current abstract state indicates a higher probability of transitioning to different abstract states.

2) State Visitation: The feature $f_5$ is the expected discounted number of times that the current abstract state will be visited given an abstract start state distribution and an optimal abstract policy. We compute this for all abstract states $\bar{s}' \in \bar{S}$ by performing dynamic programming with the equation
$$\lambda(\bar{s}') = \bar{d}(\bar{s}') + \gamma \sum_{\bar{s} \in \bar{S}} \bar{T}(\bar{s}, \bar{\pi}^*(\bar{s}), \bar{s}') \lambda(\bar{s}),$$
where $\bar{\pi}^*(\bar{s})$ is the optimal abstract policy and $\bar{d}(\bar{s}')$ is the abstract start state distribution.

3) Important Ground State Reachability: The feature $f_6$ is a novel measure of reachability from a given ground state to a set of nearby important ground states.

Definition 3. A set of important ground states $S_G$ is $(k, h)$-reachable from a given ground state $s$ if, after performing any arbitrary sequence of actions $a_1, \ldots, a_k$, there is at least one important ground state $s_g \in S_G$ that is still reachable within $h$ actions given a probability $\epsilon > 0$.

Fig. 2. An example of $(k, h)$-reachability where $k = 3$ and $h \le 4$. The green region is all states reachable from a given ground state $s$ within $k$ actions. The yellow/red regions are the states from which the set of important ground states $S_G$ is still reachable/unreachable within $h$ actions.

Similar to recent work on SSPs [20], [19], this measure establishes an envelope (illustrated in Fig. 2) of ground states in which the probability of reaching a set of important ground states $S_G$ from a given ground state $s$ is always greater than zero. In general, while certain MDPs may have transition topologies that permit calculating $(k, h)$-reachability exactly, it usually must be estimated.
To do this, we provide a constant-time estimation procedure in Algorithm 1. Here, the accuracy of the estimate improves with the number of samples parameterized by $n$ and $m$. We choose $k$ to be proportional to the diameter of the current abstract state as this is the maximum number of actions that can be performed by the local policy solved for the current abstract state.

Algorithm 1: ESTIMATE-(k, h)-REACHABILITY
  Input: An MDP $M$, a ground state $s$, a set of important ground states $S_G$, and the parameters $k$, $h$, $n$, and $m$
  Output: The probability that the set of important ground states $S_G$ is $(k, h)$-reachable from the ground state $s$
  $S_k \leftarrow \emptyset$
  for $i \in \{1, \ldots, n\}$ do
      $s' \leftarrow s$
      for $j \in \{1, \ldots, k\}$ do
          $s' \leftarrow$ SIMULATE-RANDOM-ACTION($M$, $s'$)
      $S_k \leftarrow S_k \cup \{s'\}$
  $\sigma \leftarrow 0$, $\rho \leftarrow \emptyset$
  for $s_k \in S_k$ do
      for $i \in \{1, \ldots, m\}$ do
          $s' \leftarrow s_k$
          for $j \in \{1, \ldots, h\}$ do
              $s' \leftarrow$ SIMULATE-RANDOM-ACTION($M$, $s'$)
              if $s' \in S_G$ or $\exists p \in \rho$ such that $s' \in p$ then
                  $\sigma \leftarrow \sigma + 1$
                  $\rho \leftarrow \rho \cup$ PATH($s_k$, $s'$)
                  break
  return $\sigma / n$
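A rough, runnable Python counterpart to Algorithm 1 (the mdp interface, hashable states, and helper names are my assumptions, not the paper's):

```python
import random

def simulate_random_action(mdp, s):
    """Sample a successor of s under a uniformly random action; stands in
    for SIMULATE-RANDOM-ACTION. Assumes mdp.actions(s) and
    mdp.sample_successor(s, a) exist (illustrative interface)."""
    a = random.choice(mdp.actions(s))
    return mdp.sample_successor(s, a)

def estimate_kh_reachability(mdp, s, goal_states, k, h, n, m):
    """Monte Carlo estimate of (k, h)-reachability, following Algorithm 1."""
    # Phase 1: sample n states reachable from s via k random actions.
    endpoints = []
    for _ in range(n):
        s_prime = s
        for _ in range(k):
            s_prime = simulate_random_action(mdp, s_prime)
        endpoints.append(s_prime)

    # Phase 2: from each sampled endpoint, roll out up to h random actions
    # m times, counting rollouts that hit a goal state or any state on a
    # previously successful path (the role of rho in Algorithm 1).
    sigma = 0
    on_successful_path = set()
    for s_k in endpoints:
        for _ in range(m):
            s_prime, path = s_k, [s_k]
            for _ in range(h):
                s_prime = simulate_random_action(mdp, s_prime)
                path.append(s_prime)
                if s_prime in goal_states or s_prime in on_successful_path:
                    sigma += 1
                    on_successful_path.update(path)
                    break
    return sigma / n  # normalization as written in Algorithm 1
```

Note that the sketch reproduces Algorithm 1's normalization by $n$ as written, even though up to $n \cdot m$ rollouts can succeed.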
VI. EXPERIMENTS

We now evaluate the proposed approach (DQN) against a set of baseline approaches on a standard benchmark domain.

a) Hypothesis: Any approach to the metareasoning problem for partial state abstractions should try to optimize time-dependent utility by selecting the abstract states to expand in a sequence of partially abstract MDPs. Ideally, the approach should identify two cases. First, there are cases in which cheap and expensive expansion strategies result in roughly equal policy quality, reducing computation time at negligible sacrifice to policy quality. Second, there are cases in which expensive expansion strategies result in much higher policy quality than a cheap expansion strategy, boosting policy quality at marginal amortized computation time. Our hypothesis is that the proposed approach will learn to exploit these two cases and hence optimize time-dependent utility beyond the baseline approaches.

b) Experimental Setup: All approaches were evaluated on 100 random simulations. For each simulation, we record three metrics: the values for the policy quality, computation time, and time-dependent utility of the final policy. The proposed approach was trained on 1000 random simulations using deep Q-learning with standard settings. The neural network has two hidden layers of 64 and 32 nodes with ReLU activation and a linear output layer of 3 nodes. The step size is 0.0001. The exploration strategy is $\epsilon$-greedy action selection with an exploration probability $\epsilon$ that is annealed from 1 to 0.1 over 1000 episodes. The experience buffer capacity is $\infty$. The number of steps is 20000. The buffer initialization period is 200. The target network update interval is 1000. The minibatch size is 64. All simulations for training and evaluation were generated using different randomization seeds to measure generalizability to unfamiliar simulations.

c) Standard Benchmark Domain: We consider the Earth observation domain proposed in early work on ground MDPs [9] and recently modified in work on partially abstract MDPs [17]. In this domain, a satellite orbiting Earth indefinitely must take photos of points of interest $P$ with weather levels $W$ that change stochastically. The satellite starts at longitude $x \in X$ with its camera focused at latitude $y \in Y$. Given the rates $\Delta Y$ and $\Delta X$, the satellite can then either do NO-OPERATION, shift its camera NORTH to latitude $(y + \Delta Y) \in Y$, shift its camera SOUTH to latitude $(y - \Delta Y) \in Y$, or take an IMAGE of a point of interest at latitude $y \in Y$ and longitude $x \in X$. Concurrent to each action, the satellite orbits from east to west, described by longitude $((x + \Delta X) \bmod |X|) \in X$, where the modulo operator creates periodic boundary conditions to represent continuous orbits around Earth. Most importantly, given the IMAGE action, the satellite earns a reward proportional to image quality such that image quality is a function of the weather $w \in W$. The formal definitions of the ground, abstract, and partially abstract MDPs are in recent work [17].

d) Baseline Approaches: We consider pure and hybrid approaches that expand the current abstract state and a set of informative abstract states. The NAIVE approach expands no informative abstract states. The GREEDY approach expands informative abstract states that contain a point of interest within 1 abstract state of the current abstract state. The PROACTIVE approach expands informative abstract states that are reachable from the current abstract state to any abstract state that contains a point of interest within 2 abstract states of the current abstract state. The HYBRID approach uses either the NAIVE, GREEDY, or PROACTIVE approach depending on the $(k, h)$-reachability of the current ground state and the occupancy frequencies of the abstract MDP.

e) Experimental Results: Fig. 3 shows that the proposed approach optimizes time-dependent utility beyond the baseline approaches. First, NAIVE, GREEDY, and PROACTIVE exhibit poor time-dependent utility (29.6, 36.6, 32.2). This is because they lead to either high policy quality in too much computation time or low policy quality. Next, HYBRID exhibits better time-dependent utility (40.7) by heuristically reasoning over expansion strategies via careful expert design. Finally, DQN exhibits the best time-dependent utility (44.7) by performing explicit optimization using deep reinforcement learning. Overall, the proposed approach decision-theoretically selects expansion strategies (like in Fig. 4) based on whether investing computation time would result in a worthwhile improvement in policy quality.

Fig. 3. Left: The policy quality of the final policy relative to the optimal policy over all evaluation simulations for each approach. Center: The frequency of computation times of the final policy over all evaluation simulations for each approach. Right: The distribution of time-dependent utilities of the final policy over all evaluation simulations for each approach. Putting these figures together, we observe that the proposed approach optimizes time-dependent utility more effectively than the baseline approaches by learning how to manage the trade-off between policy quality and computation time.

Fig. 4. An example policy for the proposed approach that selects expansion strategies. There are eight abstract states that each contain 2×4 ground states, where hatch marks denote a point of interest. Each band within an abstract state represents a specific expansion strategy: blue for NAIVE, orange for GREEDY, and green for PROACTIVE, such that darker shading denotes higher probability. This policy shows how the proposed approach exploits the reward structure and transition topology of the Earth observation MDP to dynamically select the expansion strategy optimizing time-dependent utility.
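For reference, a minimal sketch of the Q-network from the experimental setup above (PyTorch and the 8-dimensional input, quality, time, and the six features, are my assumptions; the paper fixes only the 64/32 ReLU hidden layers and the 3-node linear output):

```python
import torch.nn as nn

class MetareasoningQNetwork(nn.Module):
    """Q-network mapping a state of computation (q, t, f1..f6) to one
    Q-value per expansion strategy (Naive, Greedy, Proactive)."""
    def __init__(self, n_inputs=8, n_strategies=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_strategies),  # linear output layer
        )

    def forward(self, x):
        return self.net(x)
```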
VII. CONCLUSION

This paper introduces the metareasoning problem of selecting different expansion strategies and solves it using deep reinforcement learning with several general, cheap heuristics that reflect the MDP at hand. Empirically, we show that our metareasoning approach outperforms several baseline approaches and a strong heuristic approach on a standard benchmark domain. In future work, we will explore the generalizability of this work to MDPs of varied topologies.

REFERENCES

[1] D. Abel, D. Arumugam, L. Lehnert, and M. Littman. State abstractions for lifelong reinforcement learning. In ICML, 2018.
[2] A. Bai, S. Srivastava, and S. J. Russell. Markovian state and action abstractions for MDPs via hierarchical MCTS. In IJCAI, 2016.
[3] J. Baum, A. E. Nicholson, and T. I. Dix. Proximity-based non-uniform abstractions for approximate planning. JAIR, 43, 2012.
[4] A. Bhatia, J. Svegliato, S. B. Nashed, and S. Zilberstein. Tuning the hyperparameters of anytime planning: A metareasoning approach with deep reinforcement learning. In ICAPS, 2022.
[5] R. Chitnis, T. Silver, B. Kim, et al. CAMPS: Learning context-specific abstractions for efficient planning. arXiv:2007.13202, 2020.
[6] N. S. Flann. Learning appropriate abstractions for planning in formation problems. In 6th ICML, 1989.
[7] X. Fu, G. Yang, P. Agrawal, and T. Jaakkola. Learning task informed abstractions. In 38th ICML, 2021.
[8] E. A. Hansen and S. Zilberstein. Monitoring and control of anytime algorithms. AIJ, 126(1-2):139–157, 2001.
[9] A. Hertle, C. Dornhege, T. Keller, R. Mattmüller, et al. An experimental comparison of classical, FOND and probabilistic planning. In KI, 2014.
[10] E. Horvitz and G. Rutledge. Time-dependent utility and action under uncertainty. In 7th UAI, 1991.
[11] J. Hostetler, A. Fern, and T. Dietterich. Sample-based tree search with fixed and adaptive state abstractions. JAIR, 60, 2017.
[12] N. Jiang, A. Kulesza, and S. Singh. Abstraction selection in model-based reinforcement learning. In ICML, 2015.
[13] C. A. Knoblock. Generating abstractions for planning. AIJ, 68(2), 1994.
[14] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto. Robot learning from demonstration by constructing skill trees. IJRR, 31(3), 2012.
[15] L. Li, T. J. Walsh, and M. L. Littman. Towards a unified theory of state abstraction for MDPs. In ISAIM, 2006.
[16] A. Ma, M. Ouimet, and J. Cortés. Dynamic domain reduction for multi-agent planning. In MRS, 2017.
[17] S. B. Nashed, J. Svegliato, M. Brucato, C. Basich, R. Grupen, and S. Zilberstein. Solving Markov decision processes with partial state abstractions. In ICRA, 2021.
[18] A. Y. Ng, D. Harada, and S. J. Russell. Policy invariance under reward transformations. In ICML, 1999.
[19] L. Pineda and S. Zilberstein. Soft labeling in stochastic shortest path problems. In 18th AAMAS, 2019.
[20] L. E. Pineda, K. H. Wray, and S. Zilberstein. Fast SSP solvers using short-sighted labeling. In 31st AAAI, 2017.
[21] K. Steinkraus and L. P. Kaelbling. Combining dynamic abstractions in large MDPs. Technical report, MIT, 2004.
[22] J. Svegliato, P. Sharma, and S. Zilberstein. A model-free approach to meta-level control of anytime algorithms. In ICRA, 2020.
[23] J. Svegliato, K. H. Wray, S. J. Witwicki, J. Biswas, and S. Zilberstein. Belief space metareasoning for exception recovery. In IROS, 2019.
[24] J. Svegliato, K. H. Wray, and S. Zilberstein. Meta-level control of anytime algorithms with online performance prediction. In 27th IJCAI, 2018.
[25] M. Tamassia, F. Zambetta, W. L. Raffe, F. Mueller, and X. Li. Dynamic choice of state abstraction in Q-learning. In ECAI, 2016.
[26] M. Tomar, A. Zhang, R. Calandra, M. E. Taylor, and J. Pineau. Model-invariant state abstractions for model-based RL. arXiv:2102.09850, 2021.
[27] A. Unruh and P. S. Rosenbloom. Abstraction in problem solving and learning. In 11th IJCAI, 1989.
e54ec9ca-5571-4ea8-9eb1-954f0ae49a3c
trentmkelly/LessWrong-43k
LessWrong
Open & Welcome Thread November 2021 If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section. The Open Thread tag is here. The Open Thread sequence is here.
49e8732c-f32a-4f82-bdb3-d9dae0f5e106
trentmkelly/LessWrong-43k
LessWrong
Meetup : Ann Arbor Meetup - singing Discussion article for the meetup : Ann Arbor Meetup - singing WHEN: 26 March 2016 07:00:00PM (-0400) WHERE: 701 East University Ave, Ann Arbor, Room 1506 We'll be singing together. Nothing else is scheduled. Someone with a great speaking voice (we'll hear his singing this week) offered at the last meetup to give us some brief vocal instruction. He says it'll be better if you know what song you want to sing in advance. Come in the Church St entrance; it's a block south of Pizza House and Amer's, where we've met before. We'll be in room 1506 to the right.
d1c7b31e-6043-49c5-a3ba-1b694d6f9814
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Why we need a new agency to regulate advanced artificial intelligence [Anton Korinek](https://forum.effectivealtruism.org/users/akorinek)'s report (December 8, 2021) presents two control problems with advanced AI: * Direct control, that an AI system does what its operator wants it to do * Social control, that an AI system does what the rest of society wants it to do He uses the [Facebook Files](https://www.wsj.com/articles/the-facebook-files-11631713039) to show examples of failures in both direct and social control. To solve these problems, Korinek proposes a new federal agency to regulate advanced AI: > Throughout our history, whenever we developed new technologies that posed new hazards for society, our nation has made it a habit to establish new regulatory bodies and independent agencies endowed with world-class expertise to oversee and investigate the new technologies. For example, the National Transportation Safety Board (NTSB) and the Federal Aviation Administration (FAA) were established at the onset of the age of aviation; or the Nuclear Regulatory Commission (NRC) was established at the onset of the nuclear age. By many measures, advanced artificial intelligence has the potential to be an even more powerful technology that may impose new types of hazards on society, as exemplified by the Facebook Files. > > Given the rise of artificial intelligence, it is now time to establish a federal agency to oversee advanced artificial intelligence—an AI Control Council that is explicitly designed to address the AI Control Problem, i.e. to ensure that the ever more powerful AI systems we are creating act in society’s interest. To be effective in meeting this objective, such a council would need to have the ability to (i) pursue solutions to the direct AI control problem and (ii) to oversee and when necessary regulate the way AI is used across the U.S. economy to address the social control problem, all while ensuring that it does not handicap advances in AI.
9f11a63c-9d2a-4aee-a4f9-988c68aa5cee
trentmkelly/LessWrong-43k
LessWrong
Signalling with T-Shirt slogans It kind of started when I got this T-shirt as a present two years ago: It is not just a slogan that is quickly filtered out under the heading 'generic ad-like content'. It invites checking where the error is. It is kind of a challenge - at least for suitably minded persons. Exactly the kind of person I'd like to get in touch with more. This T-shirt signals: "I'm a nerd and proud of it." And the positive feedback I got from this was part of the reason I chose to signal this more. Maybe you'd like to signal this too. Please remember the T-shirt alone will not do it. You still have to talk to people. For the introverted among us (me included) I recommend active listening. The remainder of this post lists some slogans I have tried, some I will likely try shortly and other related resources. Obviously I'm not the only one using this signalling approach. T-shirt dealers have lots of these in stock. Thus some I just ordered online. But the most effective shirt I 'designed' myself. It is a black shirt suitable for business purposes and such a shirt with a small slogan on it stands out. I chose > That which can be > > destroyed by the > > truth should be. > > — P.C. Hodgell which I'm delighted to also find recommended by EY here. One year ago MIRI linked to Rational Attire but the store appears to be broken right now. And then there is Pretty Rational which at least contains some promising slogans. I also have this T-shirt about the Map-Territory correspondence: > The sentence > > "This T-shirt is black" > > is true if > > This T-shirt is black. > > -- Tarski (I had this printed on a dark blue T-shirt thus adding a gradual truth aspect). Other slogans of this type to be found on my T-shirts:   * Math is an integral part of my life. * Bene Gesserit Litany Against Fear    I plan to print more of my own design shortly. The next one will be > "Everyone generalizes > > from one example. > > At least, I do." > > -- Vlad Taltos You can find out more abo
f1e422db-4634-4e57-919f-cd6a880e312d
trentmkelly/LessWrong-43k
LessWrong
March 18th: Daily Coronavirus Links As part of the LessWrong Coronavirus Link Database, Ben, Elizabeth and I are publishing daily update posts with all the new links we are adding each day that we ranked a 3 or above in our importance rankings. Here are all the top links that we added yesterday (March 18th), by topic. Aggregators Aggregation of PubMed C19 papers Collection of every academic paper (including preprints) submitted to pubmed about coronavirus Dashboards Florida Health Department's Case Map A comprehensive and detailed dashboard on the COVID-19 statistics in Florida (Gender, Age, County), people tested, cases monitored etc Everyday Life A group house describes their precautions and plans Medical System Interactive model: when will US states run out of hospital beds This model attempts to determine at what point hospital beds will be filled by COVID-19 patients and hospitals will be above capacity. Uses hospital beds, not ICU beds or ventilators Testing labs worried about basic supply shortages Commercial labs are gearing up for a huge testing effort, but warn that they'll quickly run out of basic supplies like cotton swabs Progression/Outcome C19 can sicken children If you survey enough people (denominator unclear), you will find children with moderate and even severe symptoms Science COVID-19 may kill some through cytokine storm Accumulating evidence suggests that a subgroup of patients with severe COVID-19 might have a cytokine storm syndrome. They recommend identification and treatment of hyperinflammation using existing, approved therapies with proven safety profiles to address the immediate need to reduce the rising mortality. Biologist-targeted intro to CVs in general An overview of the coronavirus family, including physical form, pathogenicity, and epidemiology Slightly more technical biologist-targeted intro An overview of the coronavirus family, including physical form, pathogenicity, and epidemiology Spread & Prevention Twitter: Exposure dose matters Imm
751aa095-d24b-4298-84f3-bb9ef353ca99
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
Are there any AI alignment projects which governments could usefully put a very large amount of resources into? A government program could spend orders of magnitude more funds than [private philanthropy](/?state=8U2Y&question=I%E2%80%99m%20interested%20in%20providing%20significant%20financial%20support%20to%20AI%20alignment.%20How%20should%20I%20go%20about%20this%3F). This might allow a whole new class of research programs to be considered, such as projects where even [a billion dollars](https://forum.effectivealtruism.org/topics/megaprojects) is not enough. [Some](https://www.brookings.edu/blog/techtank/2015/04/14/understanding-artificial-intelligence/) [have](https://twitter.com/yudapearl/status/1641978456513867776) also suggested that a government-sponsored program like the Manhattan Project could be helpful for coordination, though [others](https://twitter.com/boazbaraktcs/status/1647246477385781249) question whether the situations are analogous. While many [support](https://www.cold-takes.com/how-governments-can-help-with-the-most-important-century/) the idea of government funding for AI alignment projects, there is [some debate](https://forum.effectivealtruism.org/posts/NzuJmTtRfJBjxcmnD/action-help-expand-funding-for-ai-safety-by-coordinating-on?commentId=voZ6hjdcHwF8kdEdu#comments) about whether government funding could be harmful if, for instance, it served to distort research incentives, lock in certain research directions, or attract bad actors through increasing the prestige and money available to those in the field.
95e00577-4267-4579-a83a-7766e9d168c2
trentmkelly/LessWrong-43k
LessWrong
Apply to the AI Security Bootcamp [Aug 4 - Aug 29] tl;dr We're excited to announce AI Security Bootcamp (AISB), a 4-week intensive program designed to bring researchers and engineers up to speed on security fundamentals for AI systems. The program will cover cybersecurity fundamentals (cryptography, networks), AI infrastructure security (GPUs, supply chain security), and more novel attacks on ML systems (dataset trojans, model extraction). This program will run in-person from 4th Aug to 29th Aug in London, UK. We will cover all expenses. Apply here to participate in AISB before EOD AoE, 22nd June 2025. We are also looking for instructors for parts of the program and staff to help with operations. Apply here. Summary We are running a 4-week program designed to equip AI safety researchers and engineers with critical security skills. We hope you'll leave the program with a well-practiced security mindset that will help you work on impactful projects and make AI systems more secure. The curriculum includes exercises designed to help you get hands-on experience with securing AI systems, while building and practicing the security mindset. This includes a mix of pair programming exercises, lectures, reading about public vulnerabilities, and chats with experts.  This program is aimed at people who are at ~the start of their journey into security, and have working knowledge in ML (or are willing to brush up using the MLAB curriculum before the program). We encourage you to apply if you think you'll be a good fit regardless of checking all the boxes - if going on technical deep dives and trying to understand how systems work by peeling the layers of abstraction away excites you, we'd love to hear from you. Content The content is divided into roughly three sections - introduction to security, cybersecurity for AI infrastructure, and attacks on modern ML pipelines. The exercises are designed to give you hands-on experience with cybersecurity on both the offensive and defensive sides, as well as train your security muscl
2015e741-c020-4373-9017-b8eeb80a9fce
trentmkelly/LessWrong-43k
LessWrong
Fear as fossil fuel Previous post: Extraordinary ethics require extraordinary arguments My blog entries are about a personal battle against depression and anxiety, from the point of view of someone who has been immersed in rationalist/LW ideas and culture for a few years now. In a chat with my little brother two nights ago, I told him about my "pain-strain theorem" - physical strain can be used to alleviate mental pain, and mental strain can be used to alleviate physical pain. (Proof as exercise for the reader.) I introduced the idea to tie up several narrative-of-our-lives style observations: * I worked very hard on my schoolwork in elementary school. A large part of my motivation was that I had a severe chronic illness across my whole body. Keeping myself mentally occupied offered some sweet relief from the constant pain. * Our father also has a chronic illness that causes him a lot of pain. Before he developed this illness, he was a gregarious, vivacious twentysomething - a hard worker to be sure, but also a great jokester and eager team sport player. By the time we were being reared, however, he had pulled away from all of that and started sinking all of his time into his decently mentally challenging work. * Both my brother and I are diagnosed with mental illnesses. Both my brother and I find a solid, high intensity workout to be one of the best ways to escape the vicious thinking cycles we get locked into. For me, sometimes, the unthinking that comes with being 40 minutes into an elliptical workout is akin to sleep. My chronic illness thankfully began to relent as I became a teenager; my grades began to slip. My depression deepened; I isolated myself in academia; my grades returned to their high point. I went to community college and let the maw swallow me. I had very close to a 4.0 before transferring out of my community college to an Ivy League, when everything else in my life was terrible - no money, no career prospects, few friends and newly feralized social skills. N
775339a7-8604-47f2-975a-fdcfa12bde1d
StampyAI/alignment-research-dataset/arxiv
Arxiv
An Extensible Interactive Interface for Agent Design 1 Introduction --------------- In AI we identify the correct behavior for robot systems in several ways. A popular way is through reward functions. However, reward functions make a lot of assumptions about the design setting; for example, they are only readily applicable in problems where the state space is defined in human-understandable features, or where goal states can be easily expressed in a general-purpose programming language. Approaches based on learned reward functions allow users to specify tasks in alternative, more flexible ways. For example, in Christiano et al. ([2017](#bib.bib4)) and Sadigh et al. ([2017](#bib.bib13)) the target behavior is specified through comparisons between states. Over time the reward function learns a state ranking that agrees with (and hopefully generalizes) these comparisons and is used to train a policy or optimize a trajectory. However, these methods often exhibit poor sample complexity on complex tasks, and this has led to approaches that leverage e.g. an externally generated set of expert demonstrations Ibarz et al. ([2018](#bib.bib8)). In this work, we draw inspiration from hierarchical planning systems where plans consist of a sequence of high-level actions, each of which runs a simpler, primitive policy. Our approach combines two ideas: 1) we can use a set of primitive policies to efficiently define policies that perform complex tasks; and 2) in subsequent training rounds, we can use these complex policies as primitives themselves. Starting from an initial state, we sample a number of rollouts of fixed length from our primitive policies. We show these rollouts to a human designer who selects the best one for the target task in an interface akin to those used in Christiano et al. ([2017](#bib.bib4)) and Sadigh et al. ([2017](#bib.bib13)). The process then repeats from the final state in the selected rollout until the designer indicates the end of an episode. From this data, we can train new policies in two ways. First, from the implicit comparisons between the best rollout and the other rollouts, we learn a reward function using preference learning. This reward function can then be used to train a policy using reinforcement learning. Second, we can train goal classifiers based on states reached in the demonstrations, and train a policy using reinforcement learning to maximize goal classification probability. This provides the user with a way to specify behaviors quickly when a goal state is easy to reach through a sub-optimal demonstration. We provide a demonstration of this approach in the Lunar Lander domain. We train policies in three stages. In the first stage, we use random behavior to learn two policies: stabilize and drop. Stabilize is trained to reach goal states where the lander is level and (close to) stationary. Drop is trained to turn off the engines for final landing. In the second stage, we use a combination of random behavior and the stabilize policy to train policies that move stably to the left or stably to the right. In the final stage, we use all four policies to train a policy that successfully lands without crashing. Our final solution is able to land successfully in over 90% of episodes. 2 Related work --------------- This work continues a recent trend of agent specification using interpretable techniques. One example is *preference learning*, where specification is based on comparisons between examples of behavior.
However, in preference learning, the user has little control over exploration—that is, what kind of examples they are comparing, and therefore what kind of information they are supplying. Existing approaches for generating examples include selection based on reward uncertainty Christiano et al. ([2017](#bib.bib4)) and active synthesis of maximally-informative comparisons Sadigh et al. ([2017](#bib.bib13)). In contrast, our approach allows the user to influence exploration directly through demonstration. Another example of this trend is *natural language instruction*. However, natural language must be grounded in the environment. This can be difficult; possibilities include demonstration of instructions Co-Reyes et al. ([2018](#bib.bib5)), examples of goal states corresponding to instructions Bahdanau et al. ([2018](#bib.bib2)), and rewards manually conditioned on instructions Hermann et al. ([2017](#bib.bib7)). We approach the problem from the opposite direction: instead of taking an existing vocabulary and grounding it in the environment, we give the user the means to define a vocabulary of behaviors for themselves. 3 Method --------- Training is based on demonstrations using a set of behaviors defined by the user. These behaviors, which we term *behavioral primitives*, are encoded as policies. The training process is iterative: experience generated by one set of demonstrations is used to define primitives for the next set of demonstrations, continuing until there is a primitive that can perform the full task. The user defines the first set of primitives by identifying interesting goal states in experience generated by a random policy, then training policies to reach each of those states. Thereafter the user defines primitives based either on goal states in experience generated by previous demonstrations, or directly on demonstrated behaviors. The training system therefore consists of three components: * An interface for defining goal states based on experience generated during training so far. * Apparatus for training behavioral primitives. * An interface for giving demonstrations using those behavioral primitives. Each of these components is described in detail below. ### 3.1 Defining goal states One way to define a behavioral primitive is by training a goal-conditioned policy based on a goal defined by the user. In contrast to previous work in goal-conditioned RL Schaul et al. ([2015](#bib.bib14)); Andrychowicz et al. ([2017](#bib.bib1)); Nair et al. ([2018](#bib.bib9)), we consider goal states that are abstract concepts, rather than specific environment states. Instead of referring to, say, a particular position in the room, one of our goal states might be ‘near a human’, yielding a behavioral primitive that moves to a human. This enlarges the set of possible behaviors that can be encoded as goal-conditioned policies. The user defines these abstract goal states by example. These examples are used to train a binary classifier (which we refer to as a *discriminator*) to recognize whether the agent is in the defined state. The user begins by browsing videos of previous episodes generated during the training process. Initially, these episodes are generated by a random policy, but later in training these include episodes generated by demonstrations. Once the user has identified an ‘interesting’ state (a behavior she believes will be useful for later demonstrations), she labels video frames as positive or negative examples of that state.
Frames are mapped to environment observations (which may be lower-dimensional than the video frames) by the system, creating a set of positive and negative examples of observations. These examples are then used to train a binary classifier, a *discriminator*, implemented using a neural network, to recognize the abstract state. In our experiments below, roughly 400 examples are required to train a robust discriminator for each state. ### 3.2 Behavioral primitives from goal states ![Demonstrations interface](https://media.arxiv-vanity.com/render-output/6614041/x2.png) Figure 2: Demonstrations interface. Demonstrations are given one action at a time. Each action corresponds to some temporally-extended behavior generated by running the corresponding primitive policy for a fixed number of timesteps (here, three timesteps). Because the results of each primitive may be unpredictable, the user chooses between actions based on the simulated rollouts that would result (here, the filled circle moving to the position indicated by the unfilled circle). Once the user chooses an action, the final state from the corresponding rollout is used as the first state of the next set of rollouts. To define primitive policies based on user-defined goal states, we use the discriminator's probability output as a reward signal, then train using an off-the-shelf reinforcement learning algorithm, Proximal Policy Optimization Schulman et al. ([2017](#bib.bib15)) from OpenAI Baselines Dhariwal et al. ([2017](#bib.bib6)). To maximize reward, the resulting policy must activate the discriminator as strongly as possible - in other words, it must move to and stay in the goal state. ### 3.3 Demonstrations using behavioral primitives The user gives demonstrations by using primitive policies as temporally-extended actions. Based on the current environment state, the user chooses a policy; the policy is run for a fixed number of timesteps; based on the new state, the user chooses a new policy; and so on. Primitive policies are thus similar to *options* in the options framework Sutton et al. ([1999](#bib.bib16)), with a termination condition based on the number of steps. When the effect of each primitive is predictable, the choice of primitive at each step is straightforward. In general, however, we assume primitives will be somewhat unpredictable. This is partly because we do not expect trained primitives to be perfect (e.g. movement may be erratic). However, the environment itself may also be unpredictable. In the Lunar Lander game described below, for example, it is difficult to predict how the spacecraft will move in low gravity. To enable an informed choice, at each demonstration step we show the user not only the current environment state but also a video of the rollout that would result from running each policy from that state. Essentially, the user chooses by examining the short-term futures that would result from each action.
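The two mechanisms described in §3.1 and §3.2 (a discriminator fit to user-labeled observations, and its output probability used as the reward for PPO) are straightforward to sketch in code. The following is a minimal, assumption-laden reconstruction, not the authors' released implementation: the network sizes, the training loop, and the `GoalRewardWrapper` name are all illustrative, and the classic Gym step API is assumed.

```python
import gym
import numpy as np
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Binary classifier over observations: P(agent is in the abstract goal state)."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit; sigmoid is applied where needed
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def train_discriminator(disc, positives, negatives, epochs=200, lr=1e-3):
    """Fit the discriminator on the ~400 user-labeled example observations."""
    xs = torch.as_tensor(np.concatenate([positives, negatives]), dtype=torch.float32)
    ys = torch.cat([torch.ones(len(positives)), torch.zeros(len(negatives))])
    opt = torch.optim.Adam(disc.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(disc(xs), ys).backward()
        opt.step()

class GoalRewardWrapper(gym.Wrapper):
    """Replace the environment reward with the discriminator's goal probability,
    so maximizing return means moving to and staying in the goal state."""
    def __init__(self, env, disc):
        super().__init__(env)
        self.disc = disc

    def step(self, action):
        obs, _, done, info = self.env.step(action)  # classic Gym 4-tuple API
        with torch.no_grad():
            reward = torch.sigmoid(
                self.disc(torch.as_tensor(obs, dtype=torch.float32))
            ).item()
        return obs, reward, done, info
```

A primitive policy can then be trained against `GoalRewardWrapper(env, disc)` with any standard PPO implementation; the paper itself uses the one from OpenAI Baselines.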
This interface is illustrated in Figure [2](#S3.F2 "Figure 2 ‣ 3.2 Behavioral primitives from goal states ‣ 3 Method"). ### 3.4 Behavioral primitives from demonstrations In addition to defining primitives from goal states, we also support defining behavioral primitives directly from behavior demonstrated by the user. Our early experiments used a simple imitation learning technique, behavioral cloning Pomerleau ([1991](#bib.bib10)). However, the resulting policies would often perform significantly worse than the demonstrations themselves. Instead, we note that our demonstrations offer an additional source of information. Each demonstrated action yields not only information about optimal behavior (the rollout from the selected primitive) but also *comparisons* to sub-optimal behaviors from the same state (the rollouts from the other primitives). These comparisons can be used to train a policy using *preference learning* techniques. In particular, we implement preference learning based on Christiano et al. ([2017](#bib.bib4)). This involves training a neural network to predict the result of each pairwise comparison (between the chosen rollout and one of the other rollouts, predicting which was the chosen rollout). The prediction is made using a latent reward value calculated for each frame in each rollout. This predicted reward function is then used to train a policy using a reinforcement learning algorithm. Again, we use Proximal Policy Optimization Schulman et al. ([2017](#bib.bib15)). (A minimal sketch of this pairwise-comparison loss appears at the end of this article.) (We also investigated combining behavioral cloning with preference learning, but we found this to result in worse performance than preference learning alone. Future work will investigate other ways of making use of both types of information.) ### 3.5 Iterated training For simple tasks, it may be possible to give demonstrations of the full task using only the first set of primitives defined, and from those demonstrations train a final behavioral primitive that successfully performs the task. In general, however, we assume that initial primitives will only enable demonstration of some part of the task. The user has a number of options for defining new primitives based on previous primitives. Demonstration of new behavior. The user might directly demonstrate a behavior she intends to use in later demonstrations, distilling those demonstrations into a new primitive policy as described above. For example, using a set of basic quadcopter movement primitives the user might demonstrate a loop-the-loop behavior and use this as a new primitive. Goal states from demonstrations. Demonstrations generate experience exploring parts of the state space not covered by the initial set of random behavior. From this experience new goal states can be defined from which new primitives can be trained. Returning to the quadcopter example, the user could demonstrate moving the quadcopter over a charging platform, and using that goal state, train a policy to move to the charging platform. Practically, this involves saving demonstrated episodes for browsing and labelling with the same interface as the initial set of goal states. Curriculum learning with goal states. In some cases it may not be necessary to *demonstrate* exploration of new parts of the state space; interesting new states may be reached by simply running one of the existing primitive policies in the environment (assuming that the policy is stochastic and therefore does some exploration of its own). Consider training a robot to navigate a maze.
Random exploration from the initial state may only wander around in the first part of the maze, so that the best goal state initially possible to define is only a short distance along the correct path. But by exploring randomly in the vicinity of this first state, it is easier to wander into a deeper part of the maze in which a second goal state may be defined, and so on. Essentially, we can train using a *curriculum* of gradually more advanced goals. Running a single policy in the environment can be seen as a special type of demonstration where only one action is available. This full training process is shown in [Figure 1](#S0.F1 "Figure 1"). 4 Case study: Lunar Lander --------------------------- ![Lunar Lander gameplay](https://media.arxiv-vanity.com/render-output/6614041/figures/lunarlander.png) Figure 3: Lunar Lander gameplay. The user must guide the descent of the spacecraft, landing in the area designated by the two flags. As a concrete example, we use this system to train an agent to play a video game. Lunar Lander is a simple game included with OpenAI Gym Brockman et al. ([2016](#bib.bib3)) in which a user must control a 2D spacecraft, landing gently in a designated landing zone on the lunar surface ([Figure 3](#S4.F3 "Figure 3 ‣ 4 Case study: Lunar Lander")). With a (discrete) action space of ‘rotate spacecraft left’, ‘rotate spacecraft right’ and ‘fire thrusters at the bottom of the spacecraft’, the spacecraft is very hard to control; it is challenging to train a robust policy through simple imitation learning because it is difficult to give good demonstrations. Previous work with Lunar Lander has, for example, attempted to make control easier by assisting the user in the original action space Reddy et al. ([2018](#bib.bib11)). In contrast, we enable the user to define a *new* control space which is easier to use in the first place. ### 4.1 Goal states and behavioral primitives ![Primitives defined for Lunar Lander](https://media.arxiv-vanity.com/render-output/6614041/x3.png) Figure 4: Primitives defined for Lunar Lander. Starting from random behavior, we use goal states to define ‘stabilize descent’ and ‘drop with engines off’ primitives. We then define further goal states in experience generated by the ‘stabilize descent’ primitive to define ‘fly stably left’ and ‘fly stably right’ primitives. Iteration 1: starting from random behavior, we define a stabilize descent goal state and primitive. Each episode begins with the spacecraft falling towards the surface at a random speed and angle; this primitive slows down the spacecraft and returns its angle to neutral. We train this primitive based on examples of the spacecraft being level and having a low velocity (angle and velocity are both included in the observation space). Iteration 2: from experience generated by running the ‘stabilize descent’ policy, we define stably fly left and stably fly right primitives. One major difficulty when demonstrating with original controls is the need to rotate in order to move left or right. It is easy to rotate too much and become unstable.
These primitives move the spacecraft slowly left or right without allowing the angle to deviate too much from neutral. We also train these primitives using goal states. We generate experience using the ‘stabilize descent’ primitive, which is not perfect and sometimes drifts slightly in one direction. We collect examples of this drifting and use those examples to train goal state discriminators and corresponding policies. Iteration 3: we define a drop primitive, completely shutting off the spacecraft’s engines so that when the craft is sufficiently close to the lunar surface we can actually land. This primitive is defined based on instances from the initial set of random behavior in which the engine is not firing. An illustration of these primitives is shown in [Figure 4](#S4.F4 "Figure 4 ‣ 4.1 Goal states and behavioral primitives ‣ 4 Case study: Lunar Lander"). Iteration 4: finally, using these four primitives—‘stabilize descent’, ‘fly stably left’, ‘fly stably right’ and ‘drop’—we train a policy to actually play the game by demonstrating successful landings and training a policy from those demonstrations. Defining this set of four demonstration primitives takes roughly one hour. Though the primitives are not perfect, they are sufficient to land the spacecraft in the landing zone in 80% of demonstrations. ### 4.2 Results ![Lunar Lander training results](https://media.arxiv-vanity.com/render-output/6614041/figures/lunar_lander_successful_landing_rate_by_step.png) Figure 5: Lunar Lander training results—success rate against number of RL steps in the environment. Using our approach, we are able to train a policy which lands successfully in almost all episodes. A comparable preference learning baseline, Deep Reinforcement Learning from Human Preferences Christiano et al. ([2017](#bib.bib4)), succeeds in only one out of three runs. A baseline using imitation learning instead of reinforcement learning, DAgger Ross et al. ([2011](#bib.bib12)), is more robust but does not achieve high performance. (Successful landing rate is calculated using a moving window of ten episodes. Shaded regions indicate one standard deviation across three runs, each with a different random seed. Full training curve for DAgger not shown due to dissimilarity in training method.) Starting from the four demonstration primitives described above, we perform three training runs, each with a different random initialization. In all runs, we found that only eight demonstrated episodes are required to train a successful policy. This requires roughly 15 minutes of human interaction time, followed by 90 minutes of training time (reinforcement learning using the reward function inferred from demonstrations). The resulting policy is capable of landing successfully in over 90% of episodes (see [Figure 5](#S4.F5 "Figure 5 ‣ 4.2 Results ‣ 4 Case study: Lunar Lander")). We compare these results against two baselines, matching 15 minutes of human interaction time in each case. First, we compare to preference learning from randomly-selected examples of behavior generated while training, as in Christiano et al. ([2017](#bib.bib4)). Here, training is significantly less robust; we could only train a successful policy in one out of three runs.
In the other two runs, the policy did not produce the kind of examples that would enable the user to give informative preferences, resulting in the policy only learning to hover in mid-air rather than to land. Second, we compare to simple imitation learning using Dataset Aggregation (DAgger) Ross et al. ([2011](#bib.bib12)). Here the trained policy does learn to land, but often does so by crash-landing, and often misses the target area. 5 Conclusions and discussion ----------------------------- We have presented a proposal for an interactive training interface that combines several learning methods to enable a non-technical user to build a policy from scratch. We use this interface to build a policy that plays the Lunar Lander game, and find that our method outperforms two comparable baseline methods in robustness and policy performance. ### 5.1 Future work Incorporating imitation learning. We were surprised to find that combining behavioral cloning with preference learning resulted in worse performance than preference learning alone. We would like to explore alternative methods of combining the two—for example, using behavioral cloning to pre-train, rather than as part of a combined loss. Variable-timescale primitives. In this work, all primitives ran for the same number of timesteps. However, part of the power of an iterated training process is that demonstrations can take place at increasingly abstract levels as training progresses. To enable this, primitive policies would need to run for more steps. There should be a clear criterion for how many steps are necessary for each policy. One way to achieve this might be through flexible termination conditions, as used in the options framework Sutton et al. ([1999](#bib.bib16)). ### 5.2 Acknowledgements We thank Rohin Shah and Adam Gleave for helpful comments and suggestions on an earlier version of this manuscript. This work was supported by the Center for Human-Compatible AI.
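As a postscript, here is a minimal sketch of the pairwise-comparison reward model of §3.4, referenced there. This is a reconstruction under stated assumptions rather than the authors' code: per-frame latent rewards are summed over each rollout, and a softmax over the two rollout returns is trained to predict which rollout the designer chose (a Bradley-Terry style objective). Shapes, sizes, and the training loop are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    """Latent per-frame reward, in the style of Christiano et al. (2017)."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def rollout_return(self, rollout: torch.Tensor) -> torch.Tensor:
        # rollout: [T, obs_dim] -> scalar sum of predicted per-frame rewards
        return self.net(rollout).sum()

def preference_loss(reward_net: RewardNet,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on P(chosen preferred) = softmax over the two returns."""
    logits = torch.stack([reward_net.rollout_return(chosen),
                          reward_net.rollout_return(rejected)])
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))

def fit(reward_net: RewardNet, pairs, epochs: int = 100, lr: float = 1e-3):
    """Each demonstration step yields one chosen rollout and several rejected
    alternatives, so (chosen, rejected) pairs come for free from the interface."""
    opt = torch.optim.Adam(reward_net.parameters(), lr=lr)
    for _ in range(epochs):
        for chosen, rejected in pairs:
            opt.zero_grad()
            preference_loss(reward_net, chosen, rejected).backward()
            opt.step()
```

The fitted `reward_net` then plays the same role as the goal-state discriminator above: its per-frame output becomes the reward signal for a PPO learner.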
3d97f923-eb1b-437b-8f77-da4ef7b8cd3e
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
200 COP in MI: Interpreting Reinforcement Learning *This is the ninth post in a sequence called 200 Concrete Open Problems in Mechanistic Interpretability.* [*Start here*](https://www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability)*, then read in any order. If you want to learn the basics before you think about open problems, check out* [*my post on getting started*](https://neelnanda.io/getting-started)*. Look up jargon in* [*my Mechanistic Interpretability Explainer*](https://neelnanda.io/glossary) ***Motivating papers**:* [*Acquisition of Chess Knowledge in AlphaZero*](https://arxiv.org/abs/2111.09259)*,* [*Understanding RL Vision*](https://distill.pub/2020/understanding-rl-vision/) ***Disclaimer**: My area of expertise is language model interpretability, not reinforcement learning. These are fairly uninformed takes and I’m sure I’m missing a lot of context and relevant prior work, but this is hopefully more useful than nothing!* Motivation ---------- Reinforcement learning is the study of how to create agents - models that can act in an environment and form strategies to get high reward. I think there are a lot of deep confusions we have about how RL systems work and how they learn, and that trying to reverse engineer these systems could teach us a lot! Some high-level questions about RL that I’d love to get clarity on: * How much do RL agents learn to **actually model their environment** vs just being a big bundle of learned heuristics? + In particular, how strong a difference is there between model-free and model-based systems here? * Do models explicitly **reason about their future actions**? What would a circuit for this even look like? * What are the **training dynamics** of an RL system like? What does its path to an eventual solution look like? A core hard part about training RL systems is that the best strategies tend to require multi-step plans, but taking the first step only makes sense if you know that you’ll do the next few steps correctly. + What does it look like on a circuit level when a model successfully explores and finds a new and better solution? * **How much do the lessons from mechanistic interpretability transfer**? Can we even reverse engineer RL systems at all? + How well do our conceptual frameworks for thinking about networks transfer to RL? Are there any things that clearly break, or new confusions that need to be grappled with? + I expect training dynamics to be a particularly thorny issue - rather than learning a fixed data distribution, whether an action is a good idea depends on which actions the *model* will take in future. Further, I think that it’s fairly likely that, whatever AGI looks like, it involves a significant component of RL. And that training human-level and beyond systems to find creative solutions to optimise some reward is particularly likely to result in dangerous behaviour. Eg, where models learn the instrumental goals to seek power or to deceive us, or just more generally learn to represent and pursue sophisticated long term goals and to successfully plan towards these. But as is, much alignment work here is fairly speculative and high level, or focused on analysing model behaviour and forming inferences. I would *love* to see this grounded in a richer understanding of what is *actually* going on inside the system. Some more specific questions around alignment that I’m keen to get clarity on: * Do models actually learn **internal representations of goals**? Can we identify these?
+ I’m particularly interested in settings with [robust capabilities but misgeneralised goals](https://arxiv.org/pdf/2210.01790.pdf) - what’s going on here? Can we find any evidence for or against [inner optimisation concerns](https://arxiv.org/abs/1906.01820)? * How much are models **actually forming plans internally**? Can we identify this? * Do models ever **simulate other agents**? (Especially in a multi-agent setting) A work I find particularly inspiring here is Tom McGrath et al.’s work, [Acquisition of Chess Knowledge in AlphaZero](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9704706/). They used linear probes to look for which chess concepts were represented within the network (and had the fun addition of having a former world chess champion as co-author, who gave commentary on the model’s opening play!). And they found that, despite AlphaZero being trained with no knowledge of human chess playing, it had re-derived many human chess concepts. Further, by applying their probes during training, they found that many of these concepts arose in a sudden phase transition, and were able to contrast the order of concepts learned to the history of *human* chess playing. I think this is extremely cool work, and I’d love to see extensions that try to really push on the angle of reverse-engineering and learning from a superhuman system, or on trying to reverse-engineer *how* some of these concepts are computed. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673372267/mirroredImages/eqvvDM25MXLGqumnf/u2lzggjvyixnudvjuldi.png) Overall, I think it’s pretty obvious that making progress reverse-engineering RL would be important! But is it tractable? I find it hard to say - I just haven’t seen that much work on it! I personally expect us to be able to make *some* progress in reverse-engineering RL, but for it to be even harder than reverse-engineering normal networks. Deep RL systems are, fundamentally, neural networks, and the same kind of techniques should work on them. But there are also a bunch of weirder things about RL, and I don’t know what roadblocks will appear. In my opinion, the lowest hanging fruit here is to just really grapple with reverse engineering even a simple system trained with RL, and to see how deep an understanding we can get. Tips ---- * RL is complicated! To start out, I’d aim to cut down the complexity as much as possible + **Aim for simpler problems first**: - Try to have discrete action and state spaces. This is simpler, easier to reason about, easier to construct counter-factual inputs for, etc. - Caveat - this *may* not work, and some simple problems (eg Cartpole) may be too toy to be interesting - Problems where you only need to know the current state rather than the full history (Markov Decision Processes aka MDPs) - Try to think through what kind of algorithms the final model might learn, and how interesting/interpretable this might be. If it’s just going to eg memorise a fixed series of steps, then it’s probably *too* simple to be interesting + Try to train and interpret **the smallest model that works** (this may be significantly smaller than the paper’s!) - Speculatively, this may be easiest for a small transformer - I have a fuzzy intuition that the sequence-based structure of transformers tends to more naturally represent discrete logical structure than conv nets, and attention makes it easier to observe the flow of information through the network. I further speculate that discrete input spaces make things cleaner and easier to interpret.
- This is a weakly held take; it’s certainly possible that smaller models are harder because they eg have more superposition and more confusing abstractions! Generally, small language models were easier and small image models were harder (than larger models). * A good starting point to interpret a model is **finding some counterfactual** - two inputs that are mostly similar, but where the model takes importantly different actions. This can let you control for lots of irrelevant model behaviour, and isolate out the circuit that matters, using techniques like [activation patching](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=qeWBvs-R-taFfcCq-S_hgMqx) (a minimal sketch of activation patching is included at the end of this post) * I would mentally divide a network into 3 parts - **sensory neurons -> analysis -> motor neurons** + The “sensory neurons” take in the raw data and process it (eg mapping the pixels of a Pong game to features like the position and velocity of the ball) + Logic to actually analyse these features, potentially including models of itself or the world, to figure out what to do + “Motor neurons” to then convert these concepts to actual actions + This is a useful distinction to have in mind, because conceptually these are different circuits of the network - Ideally, they would also be different layers of the network, but things are often messy - eg *some* of the sensory stuff is in layer 1, the rest is in layer 2, but *some* of layer 2 is doing conceptual stuff with the sensory stuff that’s already there + Note: This framing applies to models in general, not just RL! * **Training dynamics based projects** + If training your own system, make sure to have code that can be re-run deterministically (ie, explicitly setting a random seed), and to take many checkpoints + Focus on understanding the final, fully trained model, before studying it during training. You don’t need to *fully* reverse-engineer it, but it’s very useful to be confident you understand the parts of the behaviour you care about, and ideally to have distilled these to key patterns or metrics, before you try to extrapolate this to the system during training - Further, I’m much more comfortable using somewhat janky/unreliable but automated methods if they agree with the results of more rigorous methods on the final model. + See all the tips in [the training dynamics post](https://www.alignmentforum.org/posts/hHaXzJQi6SKkeXzbg/200-cop-in-mi-analysing-training-dynamics)! * **Expect interpreting RL to be hard**! In particular, I expect there to be a lot of rabbit holes and weird anomalies. If you want to make research progress, it’s important to prioritise, and to keep a clear picture in mind re what is actually *interesting* about your exploration. + Though random exploration can also be useful, especially when starting out or just messing around! It’s important to balance exploring and exploiting. I try to give myself a bounded period of time to be nerd-sniped, and then to check in and re-prioritise * Plausibly, focusing on RL problems is a bad starter project if you’re new to ML/mech interp. I expect there to be a bunch of additional complexities, and there’s less prior work that you can easily build upon. The difficulty ratings here are pretty rough and somewhat scaled to the section - **I expect a B problem here to be harder than a B elsewhere**! + But if you feel excited about RL specifically, don’t let me discourage you!
* There are a lot of RL-based algorithms in use, but my impression is that [policy gradient based algorithms](https://lilianweng.github.io/posts/2018-02-19-rl-overview/#policy-gradient), especially [Proximal Policy Optimization (PPO)](https://spinningup.openai.com/en/latest/algorithms/ppo.html#), are the main ones used on modern language models, and are what I’m personally most interested in understanding. Resources --------- * The main two papers I’ve seen trying to reverse engineer RL systems are: + [Understanding RL vision](https://distill.pub/2020/understanding-rl-vision/) which reverse engineers a convnet trained on CoinRun (ie it directly sees the pixels of a procedurally generated Mario-style game) + [Acquisition of Chess Knowledge in AlphaZero](https://arxiv.org/abs/2111.09259) - There are some [accompanying visualizations](https://storage.googleapis.com/uncertainty-over-space/alphachess/index.html?block=9&factor=4) - [A good tutorial on the AlphaZero algorithm](https://web.stanford.edu/~surag/posts/alphazero.html) + (I am not familiar with this field, and I know the lead authors of both these papers. This is definitely *not* a comprehensive literature review!) * Learning RL: + [Lilian Weng’s post on RL](https://lilianweng.github.io/posts/2018-02-19-rl-overview/), a good overview + [Spinning Up in Deep RL](https://spinningup.openai.com/en/latest/), a more in-depth tutorial + [David Silver’s intro to RL course](https://www.deepmind.com/learning-resources/introduction-to-reinforcement-learning-with-david-silver) (creator of AlphaGo and AlphaZero) * My [TransformerLens library](https://github.com/neelnanda-io/TransformerLens) + Out of the box, this works for GPT-style language models, and will take some effort to adapt to another architecture. + But my guess is that the design principles used will transfer well to other architectures (eg ConvNets, MLPs, etc), and the HookPoint functionality will be a good interface for doing mech interp work (caching and editing arbitrary activations). Problems -------- [*This spreadsheet*](https://www.neelnanda.io/cop-spreadsheet) *lists each problem in the sequence. You can write down your contact details if you're working on any of them and want collaborators, see any existing work or reach out to other people on there! (thanks to Jay Bailey for making it)* * **B-C\* 8.1 -** Replicate some of [Tom McGrath’s AlphaZero work](https://arxiv.org/abs/2111.09259) with [LeelaChessZero](https://lczero.org/). I’d start by using NMF on the activations and trying to interpret some of these. See visualisations from the paper [here](https://storage.googleapis.com/uncertainty-over-space/alphachess/index.html). + **C-D\* 8.2 -** Try applying this to an open source AlphaZero style Go playing agent, and see if you can get any traction. + **C-D\* 8.3 -** Train a small AlphaZero model on a simple game (eg Tic-Tac-Toe), and see if you can apply this work there. (Warning: I expect even just training the models here will be hard! Check out [this tutorial on AlphaZero](https://web.stanford.edu/~surag/posts/alphazero.html)) + **D\* 8.4 -** Can you extend the work on LeelaZero? In particular, can you find anything about *how* a feature is computed? I'd start by looking for features near the start or end of the network. * **B-C\*** **8.5 -** Interpret one of the examples in the goal misgeneralisation papers ([Langosco et al](https://arxiv.org/abs/2105.14111) & [Shah et al](https://arxiv.org/pdf/2210.01790.pdf)) - can you concretely figure out what’s going on?
+ **B\* 8.6 -** Possible starting point: The Tree Gridworld and Monster Gridworld from [Shah et al](https://arxiv.org/pdf/2210.01790.pdf) are both tiny networks, can you interpret those? + **C\* 8.7 -** CoinRun is another promising place to start. [Interpreting RL Vision](https://distill.pub/2020/understanding-rl-vision/) made significant progress interpreting an agent trained to play it, and Langosco et al found that it was an example of goal misgeneralisation - can you build on these techniques to predict the misgeneralisation - [CoinRun](https://openai.com/blog/quantifying-generalization-in-reinforcement-learning/#:~:text=CoinRun%20was%20designed%20to%20be,quantifiable%20supply%20of%20training%20data.) is a procedurally generated Mario-style platformer, where the agent learns to get a coin at the end of the level. The coin is always in the same position during training, but Langosco et al moved it to a random position. The model could either get confused, go to the new coin or go to the old position, and they found that it still goes to the original position but otherwise plays the game correctly. * **B-C\* 8.8 -** Can you apply transformer circuits techniques to a [decision transformer](https://arxiv.org/abs/2106.01345)? What do you find? (Check out [their codebase](https://github.com/kzl/decision-transformer) and [HuggingFace’s implementation](https://huggingface.co/docs/transformers/model_doc/decision_transformer)) + **B\*** Try training a 1L transformer on a toy problem, like finding the shortest path in a graph, as described in the introduction + I recommend training a model on the easiest tasks given in the paper, but with 1L or 2L, and seeing if it can get decent performance (the ones in the paper are only 3L!) * **B-C\* 8.9 -** Train and interpret a model from the [In-Context Reinforcement Learning and Algorithmic Distillation](https://arxiv.org/pdf/2210.14215.pdf) paper. They trained small transformers where they input a sequence of moves for a “novel” RL task and the model output sensible answers *for that task* + The paper uses 4L transformers! I bet you can get away with smaller. + See [Sam Marks’](https://www.lesswrong.com/posts/avvXAvGhhGgkJDDso/caution-when-interpreting-deepmind-s-in-context-rl-paper) arguments for ways the paper can be overhyped or misunderstood. * Interpreting transformers trained with [Reinforcement Learning from Human Feedback](https://openai.com/blog/learning-to-summarize-with-human-feedback/) + **D\* 8.10 -** Go and interpret CarperAI’s RLHF model (forthcoming). What’s up with that? How is it different from a vanilla language model? I expect the ideas in the concrete circuits in LLMs section will apply well here. - C 8.11 - Can you find any circuits corresponding to longer term planning? - **C\* 8.12 -** Can you get any traction on interpreting its reward model? I’m not aware of much work here and expect there’s a lot of low hanging fruit - even just entering in text and qualitatively exploring its behaviour is a good place to start. + **D\* 8.13 -** Train a toy RLHF model, eg a one or two layer model, to do a simple task. Getting human data is expensive, so you can use a larger language model (GPT-2 XL or GPT-J is probably good enough, GPT-3 is definitely enough). Then try to interpret it. - Note - RL is hard and language model training is hard, so be prepared for training it *yourself* to be a massive slog, let alone interpreting it. But I would be *super* excited to see these results!
- Do the above but for larger models, which will be necessary to train it for more interesting tasks. eg GPT-2 Medium or GPT-2 XL or GPT-J - bigger models will be much more of a pain to train but more interesting to interpret * C 8.14 - Try training and interpreting a small model from [Guez et al](https://arxiv.org/pdf/1901.03559.pdf). They trained agents with model-free RL, and showed some evidence that they’d spontaneously learned planning. Can you find any evidence for or against this? * Can you interpret a small model trained with [policy gradients](https://lilianweng.github.io/posts/2018-02-19-rl-overview/#policy-gradient) on: + Tips: - I recommend picking a task with *some* non-trivial reasoning required - eg Cartpole is probably too trivial to be interesting - Any of a transformer, MLP or ConvNet feels like a reasonable choice + B 8.15 - A gridworld task + **B-C\* 8.16 -** An [OpenAI gym](https://github.com/openai/gym) task + **C\* 8.17 -** An Atari game, eg Pong + B-D 8.18 - Try any of the above, training with Q-Learning instead. * **B-D\* 8.19 -** On any of the above tasks, take an agent trained with RL, and train another network to copy the output logits of that agent. Try to reverse engineer that cloned model. Can you find the resulting circuits in the original model? + My *guess* is that the underlying circuits will be similar, but that the training dynamics of RL will leave lots of “vestigial organs” that were useful in early training but are now obsolete. And that the cloned system won’t have these, and so may be cleaner. + You can try making the cloned system smaller, which may make it easier to interpret! * **B-D\* 8.20 -** On any of the above tasks, once you’ve got some traction understanding the fully trained agent, try to extend this understanding to study it during training. Can you get any insight into what’s actually going on? (See the relevant thoughts in the tips section) + Even just making some janky metrics for the eventual behaviour/circuit and plotting it over training would be interesting! + A good place to start would be to pick an intermediate checkpoint and to just try re-running all of your analysis on it and see what is consistent and what breaks. + I’m particularly interested in the interplay between the policy and value networks in [policy gradient algorithms](https://lilianweng.github.io/posts/2018-02-19-rl-overview/#policy-gradient). * **A-D\* 8.21 -** Choose your own adventure! There’s a lot of work and papers in RL, pointing in a lot of interesting directions. Pick one you’re excited about, and see if you can make any progress reverse-engineering an agent studied there.
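As promised in the Tips section, here is a minimal sketch of counterfactual-based activation patching with TransformerLens. Everything concrete in it is an illustrative assumption rather than a prescription: the model (GPT-2 small), the clean/corrupted prompt pair (chosen so both prompts tokenize to the same length), and the choice of residual-stream hook point.

```python
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")

# A counterfactual pair: mostly similar inputs where the model should behave
# differently. Both prompts tokenize to the same number of tokens here.
clean_tokens = model.to_tokens("The capital of France is")
corrupt_tokens = model.to_tokens("The capital of Italy is")
paris = model.to_single_token(" Paris")
rome = model.to_single_token(" Rome")

def logit_diff(logits: torch.Tensor) -> float:
    """How strongly the model prefers ' Paris' over ' Rome' at the last position."""
    final = logits[0, -1]
    return (final[paris] - final[rome]).item()

# Cache every activation on the clean run.
_, clean_cache = model.run_with_cache(clean_tokens)

# Patch the clean residual stream into the corrupted run, one layer at a time.
# Layers where the patched logit diff jumps back toward the clean value are
# the layers carrying the information that distinguishes the two inputs.
for layer in range(model.cfg.n_layers):
    name = utils.get_act_name("resid_post", layer)

    def patch_hook(resid, hook, clean=clean_cache[name]):
        resid[:] = clean  # overwrite with the clean-run activation
        return resid

    patched_logits = model.run_with_hooks(
        corrupt_tokens, fwd_hooks=[(name, patch_hook)]
    )
    print(f"layer {layer:2d}: logit diff = {logit_diff(patched_logits):+.3f}")
```

The same recipe localizes behavior to attention heads or MLPs by swapping in hook names like `utils.get_act_name("z", layer)` or `utils.get_act_name("mlp_out", layer)`.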
d7921b87-e5c0-4069-858d-d8828f904e8c
StampyAI/alignment-research-dataset/blogs
Blogs
End-of-the-year fundraiser and grant successes Our **[winter fundraising drive](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/)** has concluded. Thank you all for your support! Through the month of December, 175 distinct donors gave a total of $351,298. Between this fundraiser and our summer fundraiser, which brought in $630k, we’ve seen a surge in our donor base; our previous fundraisers over the past five years had brought in on average $250k (in the winter) and $340k (in the summer). We additionally received about $170k in 2015 grants from the Future of Life Institute, and $150k in other donations. In all, we’ve taken in about $1.3M in grants and contributions in 2015, up from our $1M average over the previous five years. As a result, we’re entering 2016 with a team of six full-time researchers and over a year of runway. Our next big push will be to close the gap between our new budget and our annual revenue. In order to sustain our current growth plans — which are aimed at expanding to a team of approximately ten full-time researchers — we’ll need to begin consistently taking in close to $2M per year by mid-2017. I believe this is an achievable goal, though it will take some work. It will be even more valuable if we can *overshoot* this goal and begin extending our runway and further expanding our research program. On the whole, I’m very excited to see what this new year brings. --- In addition to our fundraiser successes, we’ve begun seeing new **grant-winning success**. In collaboration with Stuart Russell at UC Berkeley, we’ve won a $75,000 grant from the Berkeley [Center for Long-Term Cybersecurity](http://www.ischool.berkeley.edu/cltc). The bulk of the grant will go to funding a new postdoctoral position at UC Berkeley under Stuart Russell. The postdoc will collaborate with Russell and MIRI Research Fellow Patrick LaVictoire on the problem of AI [corrigibility](https://intelligence.org/2014/10/18/new-report-corrigibility/), as described in the [grant proposal](https://intelligence.org/files/CorrigibilityAISystems.pdf): > Consider a system capable of building accurate models of itself and its human operators. If the system is constructed to pursue some set of goals that its operators later realize will lead to undesirable behavior, then the system will by default have incentives to deceive, manipulate, or resist its operators to prevent them from altering its current goals (as that would interfere with its ability to achieve its current goals). […] > > > We refer to agents that have no incentives to manipulate, resist, or deceive their operators as “corrigible agents,” using the term as defined by [Soares et al.](https://intelligence.org/2014/10/18/new-report-corrigibility/) (2015). We propose to study different methods for designing agents that are in fact corrigible. > > This postdoctoral position has not yet been filled. Expressions of interest can be emailed to [alex@intelligence.org](mailto:alex@intelligence.org) using the subject line “UC Berkeley expression of interest.”
fa2f910c-dca0-47fc-abab-731009530398
trentmkelly/LessWrong-43k
LessWrong
No Universally Compelling Arguments in Math or Science Last week, I started a thread on the widespread sentiment that people don't understand the metaethics sequence. One of the things that surprised me most in the thread was this exchange: Commenter: "I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this." Me: "But you realize that Eliezer is arguing that there aren't universally compelling arguments in any domain, including mathematics or science? So if that doesn't threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality?" Commenter: "Waah? Of course there are universally compelling arguments in math and science." Now, I realize this is just one commenter. But the most-upvoted comment in the thread also perceived "no universally compelling arguments" as a major source of confusion, suggesting that it was perceived as conflicting with morality not being arbitrary. And today, someone mentioned having "no universally compelling arguments" cited at them as a decisive refutation of moral realism. After the exchange quoted above, I went back and read the original No Universally Compelling Arguments post, and realized that while it had been obvious to me when I read it that Eliezer meant it to apply to everything, math and science included, it was rather short on concrete examples, perhaps in violation of Eliezer's own advice. The concrete examples can be found in the sequences, though... just not in that particular post. First, I recommend reading The Design Space of Minds-In-General if you haven't already. TLDR; the space of minds in general is ginormous and includes some downright weird minds. The space of human minds is a teeny tiny dot in the larger space (in case this isn't clear, the diagram in that post isn't remotely drawn to scale). Now with that out of the way... There are minds in the space of minds-in-general that do not recognize modus ponens.
59f2f48f-4e2b-4b67-ad5c-9b81f7bd62bf
trentmkelly/LessWrong-43k
LessWrong
Summaries of top forum posts (1st to 7th May 2023) This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed. If you'd like to receive these summaries via email, you can subscribe here. Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these!   Philosophy and Methodologies Who regulates the regulators? We need to go beyond the review-and-approval paradigm by jasoncrawford Linkpost for this blog post. Institutional Review Boards (IRBs) were put in place to review the ethics of medical trials, and initially worked well. However, after a study participant's death, they became more stringent and over-reached (eg. requiring heart attack study participants to read and sign long consent forms during a heart attack). A similar pattern occurred with the FDA, NEPA and NRC. This is due to lopsided incentives - regulators are blamed for anything that goes wrong, but neither blamed nor rewarded for how much they slow down or speed up progress. It’s also harder to remove regulations than  to add them. The same pattern can be seen as corporations grow eg. Google is now very risk-averse and can require 15+ approvals for minor changes. The author believes this is evidence the review-and-approval model is broken, and we need better ways to mitigate risk and create safety (eg. liability laws).   How much do you believe your results? by Eric Neyman The performance of an intervention in a trial / study is a combination of its actual effect and random noise. This means when comparing multiple interventions, you should expect the top-performing ones to be a combination of good and lucky, and therefore discount for the luck portion (eg. if it estimates 4 lives per $X, you might expect 2). The author suggests keeping this in mind when considering a study, a
26253767-7563-4384-b30e-b387e8c0af6e
trentmkelly/LessWrong-43k
LessWrong
You're Entitled to Arguments, But Not (That Particular) Proof Followup to:  Logical Rudeness > "Modern man is so committed to empirical knowledge, that he sets the standard for evidence higher than either side in his disputes can attain, thus suffering his disputes to be settled by philosophical arguments as to which party must be crushed under the burden of proof." >         -- Alan Crowe There's a story - in accordance with Poe's Law, I have no idea whether it's a joke or it actually happened - about a creationist who was trying to claim a "gap" in the fossil record, two species without an intermediate fossil having been discovered.  When an intermediate species was discovered, the creationist responded, "Aha!  Now there are two gaps." Since I'm not a professional evolutionary biologist, I couldn't begin to rattle off all the ways that we know evolution is true; true facts tend to leave traces of themselves behind, and evolution is the hugest fact in all of biology.  My specialty is the cognitive sciences, so I can tell you of my own knowledge that the human brain looks just like we'd expect it to look if it had evolved, and not at all like you'd think it would look if it'd been intelligently designed.  And I'm not really going to say much more on that subject.  As I once said to someone who questioned whether humans were really related to apes:  "That question might have made sense when Darwin first came up with the hypothesis, but this is the twenty-first century.  We can read the genes.  Human beings and chimpanzees have 95% shared genetic material.  It's over." Well, it's over, unless you're crazy like a human (ironically, more evidence that the human brain was fashioned by a sloppy and alien god).  If you're crazy like a human, you will engage in motivated cognition; and instead of focusing on the unthinkably huge heaps of evidence in favor of evolution, the innumerable signs by which the fact of evolution has left its heavy footprints on all of reality, the uncounted observations that discriminate between the world
f859e00a-58b6-4320-9dd7-14db01073861
trentmkelly/LessWrong-43k
LessWrong
I’ve written a Fantasy Novel to Promote Effective Altruism Back in February I got a grant through the extended set of ACX grants to write this. I now have a draft that is sufficiently polished that I’m comfortable showing to the community, though I suspect there will be a lot of things that I will want to change based on feedback here before I start publishing it in venues that aren’t part of the EA memespace. I’m planning to post this one chapter at a time here for the next month or so, though depending on what sort of feedback I get, I might at some point stop or just make a post that has the rest of the book in one chapter. The whole thing is also available in a google doc which I strongly encourage anyone who wants to give me detailed feedback to go to and leave comments on. I especially would like feedback on the philosophical arguments in the text, with an especial emphasis on criticisms of EA ideas that you think are important to be addressed, but that aren’t brought up by any of the characters in the text. My plan is that after I’ve revised the book based on feedback received here, I will start publishing it serially on Royal Road and in the Spacebattles.com forums. After it is about half published on serial fiction websites, I plan to publish it on Amazon and make a website with the whole text posted. However this is a provisional plan — if anyone has good ideas on how I can help this novel find audiences that will like it and hopefully be influenced by it, I really want you to give me any advice or help that you can. I did set up a discord to talk about the project, but I’m really not a discord person, and while I’ll participate in any conversations that happen there, I’m probably not going to start them. The best way to talk to me privately about the text is to send me an email at timunderwood9 at gmail. Or leave long comments on the google doc. Anyways, everyone, tell me what you think, and I hope that this work is at least entertaining for some of you.   Prologue “Even if we ignore the possib
8dd74b1a-c0e6-4c78-b926-0e4ee6d27860
trentmkelly/LessWrong-43k
LessWrong
Meetup : Bratislava Meetup XIV. Discussion article for the meetup : Bratislava Meetup XIV. WHEN: 30 June 2014 06:00:00PM (+0200) WHERE: Bistro The Peach, Heydukova 21, Bratislava The usual place and time. Topic: going meta -- how to get more value from our meetups. Discussion article for the meetup : Bratislava Meetup XIV.
928111b6-c44a-4d9c-a82d-241c932b30d0
trentmkelly/LessWrong-43k
LessWrong
Cheerful one-liners and disjointed anecdotes It would be good to have a way of telling people what they should expect from jobs - especially "intellectual" jobs - they consider taking. NOT how easy or lousy the work is going to turn out, just what might happen and approximately what do they have to do, so that they will decide if they want this. When people ask for job-related advice, like "how does one become a programmer?", they are often told short and generally useful things, like "learn to code". And maybe also "it is really easy [if you aren't bad at math]". On the other hand, there are all these un-generalized stories of failures which are always just a bit different from the asker's own situation and prospects. There are a lot of them, but they are not actionable; and indeed what can you conclude from them - "it's not easy and don't learn to code"? Were anyone to ask me "how does one become a plant population biologist?", I'd say "let's walk and have a look at what grows around here and you will tell me what you see". I don't think that "remember species of interest" will cover it. And then, if the person leans down, looks hard and keeps paying attention to the small details, I would share with them some stuff that has happened to me in the field (featuring Dogs, Tourists, Gatherers, Drunks, Heathens, Homeless people, Boars, an Elk, and the most dreaded beasts - the Landowners). Now, from the point of view of my second supervisor, plant population biology is easy. She began as a geobotanist and then turned to more or less taxonomy, and it takes... work. LOTS of work. Once I brought her my orchid counts, and she said something like "a kid in the Seventh Form can do this; it's not science", and I was deeply offended and upset but didn't disagree. Because technically, if one trained a kid in the Seventh Form to write some Latin, survey rigorously, and a couple other tricks, yes, it was possible that he would have obtained the same data. Perhaps even better data, if his eyes were sharper, for example.
7d2195d7-41e7-4854-a851-498c48a940ff
StampyAI/alignment-research-dataset/blogs
Blogs
New chapter in Cambridge Handbook of Artificial Intelligence *[The Cambridge Handbook of Artificial Intelligence](http://smile.amazon.com/Cambridge-Handbook-Artificial-Intelligence-ebook/dp/B00JXII7JQ/)* has been released. It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF [here](https://intelligence.org/files/EthicsofAI.pdf). The abstract reads: > The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.
424bf6fb-4b0f-43dd-a8dc-44f2ed0b11dd
trentmkelly/LessWrong-43k
LessWrong
What are you reading? In my short-form, I write: > [...] This is way more obvious and way more clear in Inadequate Equilibria. Take a problem, a question and deconstruct it completely. It was concise and to the point, I think it's one of the best things Eliezer has written; I cannot recommend it enough. Just finished Inadequate Equilibria. Now, I'm reading: * The Big Picture from Sean Carroll (which seems a really, really good companion to The Sequences.) I'm at chapter 17/50, and I'm really enjoying it so far; it's an ambitious book though! * In fiction I picked up UNSONG from Scott Alexander; I am almost finished with the first book, and I love it (I liked most of his fiction, already.) What are you reading?
038d99ea-2079-47bc-96c4-87e9053de223
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes May 2016 Another month, another rationality quotes thread. The rules are: * Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name. * Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here. * No more than 5 quotes per person per monthly thread, please.
4daffeb3-1e68-4143-bdd7-88fc10e6eea1
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
In Defense of Wrapper-Minds Recently, there's been a [strong push](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX) against "[wrapper-minds](https://www.lesswrong.com/posts/dKTh9Td3KaJ8QW6gw/why-assume-agis-will-optimize-for-fixed-goals)" as a framework. It's argued that there's no specific reason to think that all sufficiently advanced agents would format their goals in terms of expected-utility maximization over future trajectories, and that this view predicts severe problems with e.g. [Goodharting](https://www.lesswrong.com/posts/rauMEna2ddf26BqiE/alignment-allows-nonrobust-decision-influences-and-doesn-t) that just wouldn't show up in reality.[[1]](#fneqzoggi740i) I think these arguments have merit, and the Shard Theory's model definitely seems to correspond to a real stage in agents' [value formation](https://www.lesswrong.com/posts/kmpNkeqEGvFue7AvA/value-formation-an-overarching-model). But I'd like to offer a fairly prosaic argument in favor of wrapper-minds. --- Suppose that we have some agent which is being updated by some [greedy](https://www.lesswrong.com/posts/ThtZrHooK7En9mcZr/greed-is-the-root-of-this-evil) optimization process (the SGD, evolution, etc.). On average, updates tend to decrease the magnitude of every subsequent update — with each update, the agent requires less and less correction. We can say that this process optimizes the agent for good performance according to some reward function R, or that it chisels "effective cognition" into that agent according to some rule. The wrapper-mind argument states that any "sufficiently strong" agent found by this process would: 1. Have an explicit representation of R inside itself, which it would explicitly pursue. 2. Pursue *only* R, at the expense of everything else in the universe. I'll defend them separately. **Point 1.** It's true that explicit R-optimization is suboptimal for many contexts. [Consequentialism is slow](https://www.lesswrong.com/posts/a3LncviZ6rkrTo8jJ/the-unified-theory-of-normative-ethics), and shallow environment-optimized heuristics often perform just as well while being much faster. Other environments can be just "solved" — an arithmetic calculator doesn't need to be a psychotic universe-eater to do its job correctly. And for more complex environments, we can have shard economies, whose collective goals, taken in sum, would be a strong proxy of R. But suppose that the agent's training environment is very complex and very diverse indeed. Or, equivalently, that it sometimes jumps between many very different and complex environments, and sometimes ends up in entirely novel, never-before-seen situations. We would still want it to do well at R in all such cases[[2]](#fngaicze5g0a4). How can we do so? Just "solving" environments, as with arithmetic, may be impossible or computationally intractable. Systems of heuristics or shard economies also [wouldn't be up to the task](https://www.lesswrong.com/posts/rauMEna2ddf26BqiE/alignment-allows-nonrobust-decision-influences-and-doesn-t?commentId=DTF4fkK4v2yfRzPGC) — whatever proxy goal they're optimizing, there'd be at least one environment where it decouples from R. It seems almost tautologically true, here, that the only way to keep an agent pointed at R given this setup is to *explicitly point it at* R. Nothing else would do! Thus, our optimization algorithm would *necessarily* find an R-pursuer, if it optimizes an agent for good performance across a sufficiently diverse (set of) environment(s).
**Point 2.** But why would that agent be shaped to pursue *only* R, and so strongly that it'll destroy everything else? This, more or less, also has to do with environment diversity, plus some instrumental convergence. As the optimization algorithm is shaping our agent, the agent will be placed in environments where it has precious few resources, or a low probability of scoring well at R (= high probability of receiving a strong update/correction after this episode ends). Without knowing when such a circumstance would arise, how can we prepare our agent for this? We can make it optimize for R *strongly*, as strongly as it can, in fact. Acquire as many resources as possible, spend them on nothing but R-pursuit, minimize uncertainty of scoring well at R, and so on. Every goal that *isn't* R would distract from R-pursuit, and therefore lead to failure at some point, and so our optimization algorithm would eventually update such goals away, with update-strength proportional to how distracting a goal is. Every missed opportunity to grab resources that can be used for R-pursuit, or a failure to properly optimize a plan for R-pursuit, would eventually lead to scoring badly at R. And so our optimization algorithm would instill a drive to *take* all such opportunities. Thus, any greedy optimization algorithm would convergently shape its agent to not only pursue R, but to *maximize* for R's pursuit — at the expense of everything else. --- What should we take away from this? What should we *not* take away from this? * I should probably clarify that I'm not arguing that inner alignment isn't a problem, here. Aligning a wrapper-mind to a given goal is a very difficult task, and one I expect "blind" algorithms like the SGD to [fail horribly at](https://www.lesswrong.com/posts/ThtZrHooK7En9mcZr/greed-is-the-root-of-this-evil). * I'm not saying that the shard theory is incorrect — as I'd said, I think shard systems are very much a real developmental milestone of agents. But I do think that we should very strongly expect the SGD to move its agents *in the direction of* R-optimizing wrapper-minds. Said "movement" would be [very](https://www.lesswrong.com/posts/ThtZrHooK7En9mcZr/greed-is-the-root-of-this-evil) [complex](https://www.lesswrong.com/posts/kmpNkeqEGvFue7AvA/value-formation-an-overarching-model), a nuanced path-dependent process that might lead to surprising end-points, or (as with humans) might terminate at a halfway point. But it'd still be movement in that direction! And note the fundamental reasons behind this. It isn't because wrapper-mind behavior is convergent for any intelligent entity. Rather, it's a straightforward consequence of every known process for *generating* intelligent entities — the paradigm of local updates according to some outer function. Greedy optimization processes essentially search for mind-designs that would pre-empt any update the greedy optimization process would've made to them, so these minds come to incorporate the update rule and act in a way that'd merit a minimal update. *That*'s why. (In a way, it's because greedy optimization processes are *themselves* goal-obsessed wrappers.) We wouldn't get *clean* wrapper-minds out of all of this, no. But they, and concerns related to them, still merit central attention. 1. 
**[^](#fnrefeqzoggi740i)**Plus some more fundamental [objections](https://www.lesswrong.com/posts/XYDsYSbBjqgPAgcoQ/why-the-focus-on-expected-utility-maximisers?commentId=a5tn6B8iKdta6zGFu) to utility-maximization as a framework, on which I haven't properly updated yet, but which (I strongly expect) do not contradict the point I want to make in this post. 2. **[^](#fnrefgaicze5g0a4)**That is, we would shape the agent such that it doesn't require a strong update after ending up in one of these situations.
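The post's opening claim — that a greedy optimizer's updates shrink as the agent comes to pre-empt them — can be illustrated with a toy sketch. This is my own illustration, not the post's: R is stood in by a least-squares objective and the "agent" by a bare parameter vector.

```python
import torch

# Stand-ins: "R" rewards matching a fixed target; the "agent" is a parameter
# vector; the "greedy optimization process" is plain gradient descent.
target = torch.tensor([1.0, -2.0, 0.5])
agent = torch.zeros(3, requires_grad=True)
opt = torch.optim.SGD([agent], lr=0.1)

for step in range(50):
    loss = ((agent - target) ** 2).sum()  # low loss = little correction needed
    opt.zero_grad()
    loss.backward()
    update_size = agent.grad.norm().item()  # magnitude of this correction
    opt.step()
    if step % 10 == 0:
        print(f"step {step:2d}: update magnitude {update_size:.4f}")

# The printed magnitudes shrink geometrically: the closer the agent's behavior
# is to what R rewards, the smaller each subsequent update - the sense in which
# training selects for agents that already act so as to merit minimal updates.
```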
c325c2a1-ebea-497b-806b-98b1efd96d96
trentmkelly/LessWrong-43k
LessWrong
Edinburgh LW meetup Saturday 4th of June, 2pm Another meetup at the Delhi cafe on Saturday at 2pm (67 Nicolson Street - see http://goo.gl/Wwws0). No set topic really this time, though suggestions are welcome (for now and for the long-term). Not sure what kind of things would work best in maintaining interest in these... I would personally be somewhat interested in the mindfulness practice, as well as discussing the somewhat associated recent post by Kaj Sotala. I will have the book on the Flow with me:
a729f681-3713-4ae7-b1f8-2581f6ca9f1f
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"This is a story about an odd fact about capital project decision making in engineering I noticed and how it might be related to cognitive biases Background Although I don't work in the field, I was trained as a chemical engineer. A chemical engineer's job is a little different than you might imagine. A chemical engineers primary job isn't to design chemical processes, they actually do relatively little chemistry, but to build, optimize and maintain industrial plants that produce chemicals (petrol products, cleaners, paint etc.) and materials that are produced similarly to chemicals (wood pulp, composite materials etc.). Questions similar to 'how fast should we mix the fluid in this reactor to make it most efficient?' or 'how can we reuse the waste heat from this process?' are much more common than questions similar to 'how can we create compound A from compound B?'. Chemical engineers often have to make decisions about what capital improvement projects the firm will undertake, so they must answer questions such as 'install cheap pumps that wear out quickly or the expensive ones that don't?', 'what ethanol producing bacteria is most efficient for producing ethanol?' and 'is it worth it to install a heat exchanger to recover the waste head from this process or not?'. The standard technical way of judging the profitability of an option or project is to calculate the Net Present Value (NPV) of the expected cash flows to and from the firm for each different option (installing pump type A or B, using bacteria A, B or C, installing or not installing a heat exchanger). The option with the highest NPV is the most profitable. Calculating the NPV discounts future expected cash flows for the fact that they occur in the future and you have other productive things you could do with money, such as earning interest with it. Oddly high discount rates When I was in school, I noticed an odd thing: the interest rates that people used to evaluate projects on this basis, called the Minimum Acceptable Rate of Return (MARR), were often rather high, 15-50%/year. I saw this in textbook discussions and had it confirmed by several working engineers and engineering managers. My engineering economics teacher mentioned that firms often require two year "pay back periods" for projects; that annualizes to a 50% interest rate! I was very confused about this because a bank will loan a small business at ~8% interest (source) and mortgage rates are around 5% (source). This implied that many many industrial projects would be profitable if only outsiders could fund them, because investors should jump at the chance to get 15% returns. I know I would! The profit opportunity seemed so great that I started to work out business models around alternative sources of investment for industrial projects. Overestimating benefits, underestimating costs To understand why MARRs are so high in chemical engineering, I tried to find research on the question and talked to experienced engineers and managers. Unfortunately, I never did find research on this topic (let me know if you know of relevant research). I talked to several managers and engineers and the most common answer I got was that investors are short sighted and primarily interested in short run profits. I didn't find this answer very plausible. Later, I met an engineer in charge of reviewing project evaluations made by other engineers in order to decide which projects would be approved. 
His explanation was that engineers usually overestimate the benefits of a project under consideration and underestimate the costs and that they gave engineers high MARRs in order to counterbalance this. I asked him why they didn't just apply a scaling factor to the costs and benefits and he explained that they did this a little bit, but engineers respond to this by inflating benefits and deflating costs even more! I later met another engineer who talked about doing exactly that; adjusting estimated costs down and estimated benefits up because the process evaluating projects did the reverse. One thing to note is that if engineers overestimate benefits and underestimate costs uniformly over time, then a high MARR will make projects which pay off in the short term artificially attractive (which is why I asked about using a scaling factor instead of a large interest rate). On the other hand, if engineers make more biased predictions about costs and benefits the further out they are in time (for example, if they tend to overestimate the productive life of equipment), then a high MARR is a more appropriate remedy. Possible explanations There are a couple of reasons why engineers might tend to overestimate the benefits and underestimate the costs of projects. Any number of these may contribute. I suspect cognitive bias is a significant contributor. Confirmation bias suggests engineers will tend to overestimate the benefits and underestimate the costs of projects they initially think are good ideas. The head project engineer I spoke with described a common mindset thus, 'And this is why we tend to focus on the goodness and diminish the badness of projects. We know they are good so all we need to do is prove it to get the approvers bought in. Then the project approvers over time notice that these projects’ returns are lower than expected so they say, “Let’s raise the bar.” But guess what? The bar never rises. Why? Because we still "know" what the good projects are and all we need to do is prove they are good.' The planning fallacy suggests engineers will underestimate completion times and costs. Overconfidence suggests engineers will underestimate costs even when explicitly accounting for uncertainty. Bad incentives: engineers may often be rewarded for spearheading projects and not punished commensurately if the project is not beneficial so that they often expect to be rewarded for spearheading a project even if they don't expect it to be a success. Addendum: The same head project engineer suggests that one way to get better predictions, at least with respect to project duration, is to have non-technical observers make the predictions (related to taking an outside view). Anyway I started asking our cost coordinator about predicted schedule and she is by far more accurate than the engineers with how long it takes to do a project. That has led me to think that an independent review would be a good step in project returns. Unfortunately, I have not noticed her to be any better on predicting project performance than the engineers. On adjusting predictions based on a track record The problem with predicting a project will take longer than expected based on experience does not help because managers (usually engineers) want to know "why" so they can "fix it."" The above is a chunk of text from a post created by a user on the website LessWrong. Could you delve deeper into the role of cognitive biases like confirmation bias and overconfidence in the decision-making process of engineers? 
Additionally, can you propose a more effective way to counteract these biases beyond raising the MARR? Cognitive biases, such as confirmation bias and overconfidence, can significantly impact the decision-making process of engineers, particularly in capital project decisions. Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses. In the context of engineering, if an engineer initially believes a project is a good idea, they may unconsciously seek out information that supports this belief and ignore or downplay information that contradicts it. This can lead to an overestimation of the project's benefits and an underestimation of its costs, resulting in skewed decision-making. Overconfidence bias is the tendency to overestimate one's own abilities or the accuracy of one's predictions. In engineering, this can manifest as an engineer underestimating the costs or time required for a project, believing they can complete the project more efficiently or cheaply than is realistically possible. This bias can also lead to an overestimation of the project's benefits, as the engineer may be overly optimistic about the project's potential success. To counteract these biases, it's important to incorporate checks and balances in the decision-making process. This could include involving multiple stakeholders in project evaluations, promoting a culture of constructive criticism and open dialogue, and implementing robust project review processes. For instance, an independent review board could be established to evaluate project proposals. This board should be composed of individuals from various backgrounds, not just engineering, to ensure a diversity of perspectives. This could help to identify and mitigate the impact of cognitive biases on project decisions. Moreover, decision-making processes could be improved by incorporating techniques from behavioral economics. For instance, 'nudging' is a concept where small changes in the way choices are presented can significantly impact the decisions people make. In the context of engineering decisions, nudging could involve presenting information in a way that encourages engineers to consider a wider range of potential outcomes, or to think more critically about the costs and benefits of a project. Additionally, training in cognitive bias awareness and mitigation could be beneficial. This could help engineers to recognize when they might be falling prey to confirmation bias or overconfidence, and to adjust their decision-making processes accordingly. Lastly, implementing a system of post-project reviews could be beneficial. By analyzing the outcomes of past projects, engineers can gain a better understanding of where their predictions were accurate and where they were off the mark. This could help to calibrate future predictions and reduce the impact of cognitive biases on decision-making. How can the concept of 'nudging' from behavioral economics be specifically applied in the engineering decision-making process to mitigate cognitive biases? 'Nudging' is a concept from behavioral economics that involves subtly guiding choices through changes in the way options are presented. It can be a powerful tool to mitigate cognitive biases in the engineering decision-making process. Here are a few ways 'nudging' could be applied: 1. Framing: The way information is presented can significantly influence decisions. 
For instance, presenting the same information in terms of potential losses rather than potential gains can lead to different choices due to loss aversion. In engineering, this could mean framing decisions around the potential costs of not undertaking a project, rather than the benefits of doing so. 2. Default Options: People tend to stick with the default option, especially when faced with complex decisions. In engineering, this could mean setting the default option to the most cost-effective or sustainable choice. Engineers would have to actively choose to deviate from this option, which could encourage more thoughtful decision-making. 3. Choice Architecture: The order and manner in which options are presented can influence choices. For instance, options presented first or last are often more likely to be chosen due to the primacy and recency effects. In engineering, this could mean presenting the most desirable options (from a cost, efficiency, or sustainability perspective) first or last. 4. Feedback: Providing feedback on past decisions can help individuals make better choices in the future. In engineering, this could mean providing feedback on the accuracy of cost and benefit estimates for past projects, which could help engineers make more accurate predictions in the future. 5. Peer Comparison: People often look to the behavior of others when making decisions. In engineering, this could mean sharing information about the decisions made by other engineers or firms in similar situations. This could encourage more careful consideration of all available options. Remember, the goal of nudging is not to dictate decisions, but to help individuals make better choices by highlighting important information and structuring choices in a way that mitigates the impact of cognitive biases.
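A minimal sketch may help make the original post's NPV-versus-MARR puzzle concrete. The cash flows below are hypothetical figures of my own choosing, not from the post; the point is only that the same project flips from clearly profitable to rejected as the discount rate moves from a bank-like 8% to the roughly 50% implied by a two-year payback requirement.

```python
def npv(rate, cash_flows):
    """Net Present Value; cash_flows[t] is the net cash flow at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical heat-exchanger project: $100k installed now, $30k/year of
# recovered-energy savings for 8 years.
flows = [-100_000] + [30_000] * 8

for marr in (0.08, 0.15, 0.50):
    print(f"MARR {marr:>4.0%}: NPV = ${npv(marr, flows):>10,.0f}")

# NPV is positive at 8% and 15% (the project passes) but strongly negative
# at 50% (rejected) - the gap between outside lending rates and engineering
# MARRs that the post identifies as the puzzle.
```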
057321b4-d002-4ce8-a175-717ab215fa0f
trentmkelly/LessWrong-43k
LessWrong
DALL-E by OpenAI This is a linkpost for https://openai.com/blog/dall-e/ My own take: Cool, not super surprising given GPT-3 and Image GPT. I look forward to seeing what a bigger version of this would do, so that we could get a sense of how much it improves with scale. I'm especially interested in the Raven's Progressive Matrices performance.
223f7f30-18ce-4bda-9ef2-e605d32fcd5e
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Superintelligence Mod for Civilization V hi everyone so today we're trying something a little bit different because I read that there's recently been released a mod for the civilization games which includes AI as a as a available technology so they've changed the game so that you can actually produce super in charge in AI looks really interesting and this is actually come from the Center for the Study of existential risk at the University of Cambridge the The Verge has a nice interview I'll put a link to this in the description a nice interview with dr. Shah Ravine who is the researcher who is managing the project and yeah so I thought what we do for this video is have a kind of a let's play I'm new to making Let's Plays and I'm actually kind of new to playing civilization as well I didn't play it as a kid so I've got a friend to come and help me with installing it and configure it and everything like that oh it's it's dr. Shah ravine from the Center for the Study of existential risk okay let me help sir oh those those silly but I like it okay so I've got I've got steam here I've installed say five which version of safe over there like regular one just the base game Oh vanilla okay the one so how do I install this mod so you need to go into the Steam Workshop I got this browse the wall krumping yep and then I mean what kind of lucky that will quite popular at the moment and so you can click on superintelligence that's us yeah that's the Caesar logo this version is for people who have the brave new world expansion okay yeah but that's not what you have so you can scroll down and there is a link this everyone it already get to the vanilla version with the basic game okay cool and then you just tip subscribe just okay so let's let's play okay so um my understanding of this game is you start from like the Stone Age or something right you start like prehistory beginning of humanity type thing and you worked your way up so this mod presumably only actually has an effect towards the end of the game right yes and we even say in the mode description that we recommend if you're just trying to take a kind of taste of the mod then you should start in kind of the latest age just before the thing kicks in which for vanilla would be in the modern era so we can show how to do that yeah so me to go into mods no once installed up and this there it is so it's in this one well you just need to click the V in a bowl okay okay that's it thank you I want to be with you on I think we should be England we should be the UK who's that you just passed us Elizabeth did I oh yeah right cool that's us here we are game era you want to change that to modern money so future you would have a look at this Cup a little thingy I gotcha but starting in modern is the way to do this you uh-huh I'm really nervous now because usually people when they watch a let's play they want to watch someone who's like really good at the game win the game and I don't know what I'm doing at all and I think it's gonna be terrible and I hope people are okay with watching me fail to save the world from rogue super intelligence because that's probably what's gonna happen right the stone is also good for production but I think at this point not as important only one military you bloody hell yeah I figured that stuff out thank you okay I actually have a guy here let's see okay okay go over and say hi it's al Almaty they give us 30 gold thanks man feel bad they gave me 30 gold I can't even I don't even know how to pronounce their name of the city so 
These are the things we can currently research, and if you look a little bit into the future... all of this stuff is added by the mod? Yep. Oh, this is a new one: the Orthogonality Thesis. Nice, that was my last video, so that's good. "It allows you to build the AI Safety Lab to reduce the chances of rogue superintelligence eliminating the human race." Wouldn't that be nice? That sounds great.

The settlers are going to establish a city here. It's Nottingham. We've built Nottingham, and now we're unhappy. Social policies gave us more happiness from universities than we would otherwise get; founding a new city doesn't usually give you happiness. I have to do it: I have to build Nottingham University, and then I can build a computer there, and then my own channel, and it's going to be great.

It's Dublin; they're militaristic but friendly. I have no comment about that. It's Tyre: militaristic and irrational. Another city-state: it's Belgrade. I got that one wrong as well; again, militaristic but friendly. Everyone is so militaristic. Oh, Alexander. Hi. I've heard him called Alexander the Great, if it's the same guy. I think it is. Maybe this is his cousin.

Horses: we don't have any horses near any of our cities. Admittedly we're in the modern era, so we don't really care about horses; they have become obsolete. "Like you guys soon will be." Oh yeah, I keep forgetting that.

So what is this guy doing? Montezuma. He's got the crazy eyes. And the crazy hat. I mean, I like his hat; I have no criticism whatsoever about the hat. Montezuma the terrible. Everybody is terrible. The thing I like is that he's "friendly". I would hate to see him when he's pissed off.

I'm going to move around here so I can see more. I've discovered the Grand Mesa, and now we're actually happy; everybody's pleased about a big rock. Let me have a look at it. It's taking a while to load. And it's Washington: "It is his earnest hope that our two peoples can live side by side in peace and prosperity." That sounds good. A trade: one sugar for one cotton? Happy with that. Sugar is a luxury, and happiness is good; we need that too. We are the British; there should be a mod that adds tea. You can't grow tea in this game. This game is a fraud.

So, in order to be successful in AI in the late game, what are good things for a civilization to have? First of all, you want to make sure there is enough AI safety research being done, which requires safety labs, and you can only have one per city, so having a fair number of cities is not bad. Having a bunch of alliances with city-states gives you access to their AI research. That makes sense: so if there are safety researchers in Dublin and we're friends with them, we can use their safety research? Yes, including research they have done previously. And of course you want a good science base, so you discover all the technologies that speed up your research.

So, you know how citizens get made? Shall we get into it? Well, what really happens is that you have lots of apples, and your little circle fills up with apples, and then those apples turn into a citizen, and your circle gets emptied of all the apples and you start over. That's pretty much how I remember it from biology.

Hi, Askia. Are you aware that your city is on fire? Maybe you should attend to that first. Or is this somebody else's city? They do get triple gold from pillaging cities, so let's not dwell on whose city it is. "You do not shrink from war when necessary, or fun, or expedient." Again: "friendly". Not really seeing it. I said I had a bad feeling about him. So I get to choose between "You'll pay for this in time!" and "Very well." Which do you think is more foolish? The second one. But does this have any game impact? No. Okay, that's nice.

We definitely need to make sure we're making some units. You see, that's the thing: you have a limited amount of resources, and then Montezuma shows up and you can't spend them on AI. It's sort of symbolic: the game that DeepMind's DQN stuff had the biggest challenge playing was Montezuma's Revenge, and now Montezuma is screwing with our AI plans. It may be somewhat beautiful. Damn you, Montezuma; we will have our revenge.

I don't like how militarized this whole area has become. Oh, they moved closer; that was really nice of them. Idiots. I wonder if anybody else is going to declare war on us. I hope not. Oh, these guys are taking everyone on. In fact they can kill off those units. That's hilarious. Because railways are so cheap to move on? True, but the game represents that by units running extremely fast.

I guess we need soldiers. Or bombers. I'm going to screw up Montezuma with bombers. Oh, he'll make peace, as long as we give him all of our stuff. That's really an amusing offer. Are you tempted? Look at his face: I think he's not making eye contact; he looks super distracted. What's over there? He just looks shifty. "I am bluffing. I don't know if you know: I am bluffing."

I like bombers. Hang on, are we already building a bomber in Nottingham? Yes, and you can have as many as you have oil for, and we've got plenty of oil. I like bombers too, they're cool, but I'm really annoyed that we have to do all of this military stuff. It feels wasteful, you know? Civs just burning a bunch of resources on each other. I mean, listen, that's what we do.

Maybe he'll sue for peace in a way that isn't "give us everything you own"? Really? That's mean, and it's just not necessary. Oh, that's not good. What did I do? "Your aggression leaves us no choice." What are you talking about? What aggression? I think he just wants to declare war and he's making up excuses. This is Washington the terrible. He has become terrible; he wasn't like this before. The Americans aren't usually this warmongering. What in the hell is going on? To be fair, when we went around meeting everyone, they were all militaristic, and that should have been a clue.

I don't like this. He has a lot of troops around all our cities. We're really in trouble? Yeah, we are. Is anybody not at war with us? Askia. I don't even like them. Are we screwed at this point? We're not going to get to AI anytime soon, but no one else is either, so this can just play out and then we get back on track.

All right, wartime. I mean, it is 1943; you kind of expect there to be a world war going on. But it's not a world war: everyone being at war with us doesn't count as a world war. Is there a way to know if any of these guys are at war with each other? I think we would have known. I think they're not.

London has made a bomber. Good job, London. Does that mean we have two bombers in London now? Yes. Nice. How did production jump like that? Oh, I see: because they've taken all the people out of the university and the public school and put them back into the mines. It's war. Dig for victory. I can't see units here, and those are ships, which probably means we can just get rid of both of them. This is a slightly confusing game to play sometimes.

Hang on, there are no more offensive units? Well done. That's hilarious. Naval superiority is kind of a laugh, and the initial wave of Greek troops just didn't do anything. Luckily for us. It's embarrassing, to be honest. Please get rekt. I like that one; it's like a Lancaster bomber. I've got to say, I remember the Second World War going quite differently. I don't think we were at war with the US.

There's fighting on the outskirts of Nottingham, not far from Dublin. I'm feeling worried about this. We are still at war with literally everyone we've ever met except, somehow, Songhai, for some completely unknown reason: by far the most warlike-looking of them all, the one with the skulls. Maybe he should realize he's the baddies; he's literally covered in skulls, there's a wall of skulls behind him, and yet we're to be sure we're on the side of good here.

What are the Greeks doing all the way over there? Fine. Artillery, get him. It's always funny when artillery completely destroys one guy. I'm sorry, random American soldiers; it's not going to be your day. They should not have started this; it did seem unnecessary. But we do now have a stupid amount of firepower. That guy's scary. Let's piss him off. Go over there and finish it, you cowards, and if it'll make you feel better, a great general is going to help you out. Oh my god, he gets a little Jeep. That's adorable. And a flag. He's driving a Jeep on the railway tracks. He doesn't care; he's a great general.

Please continue getting rekt. Those bombs didn't even explode; I think they literally just physically hit them. This is still so wasteful, though. It's completely consumed all of our resources for many turns. It's 1945; it's time to make peace. Oh, this offer is pretty interesting. I accept that deal; that seems reasonable. Very reasonable. And you're randomly back to being friendly? What in the hell is up with Montezuma? Are you kidding me? What weakness? I have like seven bombers; you have a sword. What are you doing? Maybe we should not have trusted the guy with the burning city. Listen, Askia, if I'm honest, I've never heard of Songhai. I know this is insulting, but where in the world is Songhai? Do you know? No? I feel like I'm revealing weaknesses in my history education.

Okay, we'll deal with this guy. Jolly good show. More wars? That's fine; we don't mind a bit of a war. And now Florence has declared war on us. We discover Computers, finally, and a great scientist has been born in the city of Nottingham. How about that. What is the name of the scientist in Nottingham? Where's my scientist? Show me my scientist. He's hiding behind the general, of course. Okay, he can rush a technology. Let's look at the tech tree. We've got Computers, so now we can rush AI. Yep, that's good. "Discover technology": you use up that golden scientist. I could get any of these things, but I think AI is the most powerful. Do it. It doesn't win the game by itself. "The question of whether machines can think is about as relevant as the question of whether submarines can swim." It lets us build an AI Lab. Good. Maybe I should explain the technology. Discovering Artificial Intelligence represents the founding of the field: something like the Turing paper, or the Dartmouth summer school. "Hey, we have computers now; maybe we can use them to solve this thing. Maybe they can think." Like a submarine swims.

So we've got AI. If we can also get Robotics, then we can get the Orthogonality Thesis, which is necessary if we want to build the AI Safety Lab, because right now we can build the AI Lab, but without safety research that's just going to cause a bad outcome. "Having discovered Artificial Intelligence, your researchers may now start working towards superintelligence. Manage your research through the Artificial Intelligence screen." You guys made a whole screen! You can click the thing and it brings up the screen. AI research level is zero, because we haven't even finished our first AI lab. From local research: zero. From open research by others: also zero, and I don't know who's going to help us, because America is at war with us; the only people not at war with us are the Aztecs. Even people who are at war with you, if they publish, you can pick up their research. A treaty would share all research and guarantee that the values of both civilizations are built into the AI being developed. I haven't signed any treaties, because everyone but Montezuma is at war with me. I think you said in the interview that one thing you've discovered is that if everyone's at war, AI cooperation becomes difficult. You can also get to that screen from the AI counter that has now appeared at the top of the screen, or through the menus: the little scroll icon, then "AI". If you click it before AI is discovered, it just says AI has not been developed yet.
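(Going by this exchange, the screen totals AI research from three sources: your own labs, open publications by other civs, and full sharing under treaties. Here is a minimal C sketch of how such a total might be computed. The civ names, the half-credit rule for picked-up open research, and every number are my illustrative assumptions, not the mod's actual formula.)

```c
/* Sketch of how the mod's AI screen seems to total research, going by
 * the explanation above: your own labs, published ("open") research by
 * other civs, and research-sharing treaties. Names and weights are
 * guesses for illustration, not the mod's real formula. */
#include <stdio.h>

struct civ {
    const char *name;
    int lab_output;     /* AI research from this civ's own labs */
    int publishes;      /* 1 if their results are open          */
    int treaty_with_us; /* 1 if we share all research           */
};

int main(void) {
    struct civ others[] = {
        { "America", 6, 1, 0 },   /* at war with us, but publishes */
        { "Aztecs",  0, 0, 0 },
        { "Greece",  3, 0, 1 },
    };
    int local = 10;               /* our own labs (assumed output) */
    int total = local;

    for (int i = 0; i < 3; i++) {
        if (others[i].treaty_with_us)
            total += others[i].lab_output;        /* full sharing   */
        else if (others[i].publishes)
            total += others[i].lab_output / 2;    /* partial pickup */
    }
    printf("AI research per turn: %d\n", total);
    return 0;
}
```

The point of the sketch is just the asymmetry the conversation describes: being at war doesn't stop you reading what others publish, but only a treaty gets you everything.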
So it's time to build some AI labs and some AI safety labs. Well, it's also time to maybe not be at war with literally everyone. Montezuma, you've done nothing to us of consequence; the worst thing you did was when that destroyer destroyed my infantry, and that was merely annoying. Do you remember when I steamrolled three of your units on the ocean? "It is not the time for negotiation." Even his horse looks at me funny. All right, I guess we're going to have to kick some ass. You will never get horses; it's never going to happen. Well, I sued for peace and he didn't want it; that much is true.

London is going to make an AI lab. Do you think that's realistic, for those of you who have never been to London? For realism, Nottingham should have the first one, but after that, one in London could work. And this guy is an idiot, but okay, I admire your guts, spraying units all over the jungle. Oh wow, the father of war wants peace? I'm inclined to accept; I see no reason not to. I've always wanted horses, and now we have all of them, like Alexander. He's bound to lose now. Technically I'm still at war with Greece; he's just rubbish at it. Why do people keep declaring war on me when it works out terribly for them every time? "They'll fight; you'll win."

What does our economic adviser say? A windmill? Really? Oh, we could build Green's Windmill! It's in Nottingham; there really is a windmill there, and it's the science windmill: it gives us extra science. Strictly, it speeds up the construction of science buildings. Good: building Green's Windmill. I'm very pleased with that. I used to live right near it, on the way to school; it's a little science museum.

I want these guys to build... no, wait, we're technically still at war with Greece. I feel like Alexander would be deeply offended that I keep forgetting we're at war with him; it's just that he's not doing anything. Threatening Nottingham with a windmill. It built the windmill. Yes, we have Green's Windmill! Now all of our science is going to be extra sciency.

"The world around you is finally becoming slightly less polluted." Yes: bombing the hell out of people until they stop being so damn belligerent. Oh, it's Louis Daguerre, the inventor of photography, wasn't he? Well, clearly he should be rushing Deep Blue when the time comes. "Daguerre: French artist and photographer, recognized for his invention of the daguerreotype process of photography." Neat. He's a photographer, and he's in Nottingham. Does it matter where Deep Blue is built? Well then, Deep Blue is built in Nottingham. Rewriting history. He can sleep in a bed until we discover Data Mining.

All right, we're researching the Orthogonality Thesis. You know, we put all of this extra information into the Civilopedia, so if you don't know what any of this is... Either way: if you didn't watch my most recent video, about the orthogonality thesis, you could watch that, or we can look in the Civilopedia. Here we go: a modern-era technology, an area of research that "lets you build the AI Safety Lab to reduce the chances of rogue superintelligence eliminating the human race". That's pretty great. The statement of the orthogonality thesis is from Bostrom: "Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal." How's that for a hidden message in a mod? Very subtle. "If you think this is important, check out this guy's channel." We are watching a video on my channel, so that works out. We also have descriptions for Artificial Intelligence. This is really nicely done; it was all written by Mr. Shafi, actually, and he deserves the credit.

You need Data Mining to build Deep Blue, so let's get some of that. How are we doing on the tech tree? We've just started the Orthogonality Thesis. So for Data Mining I need Globalization... no, all the way back: I actually need Ecology, and before that Penicillin and Plastics. All of that? I thought I was so close to cracking the code. Not quite. In fact, after AI, if you want, you can skip the Orthogonality Thesis entirely and go down the other route to capability research, if you just trust someone else to do safety for you. You went from AI down through Robotics to the Orthogonality Thesis, but after AI you could have gone Penicillin, Plastics, Ecology, Globalization, Data Mining: pure AI capabilities, never bothering with safety, because you trust that other people will handle it for you. I don't trust anyone else, because literally every other civ has at some point declared war on me for no reason. So I feel as though, if AI is going to be done right, I've got to do it myself. It's going to be a made-in-Britain AI.

Oh. Oh, those things. Maybe they'll be friends with us. Or maybe they'll declare war on us for no goddamn reason. I'm going to build the Statue of Liberty, just because I like the idea of the Statue of Liberty being in York. And I'm actually at peace with these guys; I'm now at peace with more people than I'm at war with. I feel good about that. Where is Russia, anyway? I don't know; we haven't found them. Stealth Russia: they could be over here somewhere, nobody knows for sure.

Okay, Hastings needs to decide what to do. I think they should have an AI lab as well. Of course; everyone should have one. One for you, and one for you: everybody gets an AI lab. I'll keep this unit guarding, because the Russians could come from anywhere, given that we don't know where they are. So just head off that way. I think we found the Russians. There's the border. Well, that's one way to find it. Whoa, they were coordinating with each other: Russia and the Greeks decided to do their big attack all at once. That's cute. I'm really tempted to take those horse people, though; I could do it in one turn with these units. You know what, I'm going to, because it's going to really, really annoy him.

We're ten percent of the way towards AGI. And we're doing 84 percent of world AI research? Actually all of it, since everyone's at war with us. But the danger of rogue superintelligence is at 98, so somebody else out there is doing AI research. And it's weird that we know that, because we shouldn't know it, right? True, but it's really bad to inflict bad conditions on players without letting them know it's happening. In reality, of course, we have no idea how close superintelligence is. Having a clear number that you're aiming at is nothing like reality, but it makes a lot of sense as game mechanic design. In general, in the mod, you constantly have to ask: is this important enough to capture realistically, or, if we do it that way, is it just not going to be fun to play?

That's the thing. People ask: okay, so when is it going to happen? How long is it going to take? Where are we? Everybody wants to believe that we actually know how far away we are. It's unknown unknowns: if we knew exactly what the problems were that we needed to solve, and how long it would take to solve them, we would already be most of the way to solving them. That is true. And looking at the record of humanity, we are not very good at forecasting technological progress, particularly for things that are brand new. Look at the development of nuclear weapons: you had some people actively working on it and other people saying it was never going to happen. Or the applications of electricity; go back as far as you want with transformative technologies. If you have a rough idea of how it's going to work, then you have some timeline in your head, but a lot of it depends on things you don't know until you try them. And if you don't have a timeline in your head, then you just have some vague arguments about why it's never going to happen. What you really have is a distribution of predictions. But here in the land of fiction and games, we know that we're ten percent of the way there. The question is whether the risk is outpacing our own progress, which would make sense if a lot of people are working on it. So, to clarify the rules here: if rogue-superintelligence risk hits 800 before AI research hits 800, everyone dies. We're screwed. Okay.
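(Since that rule decides the whole endgame, a concrete version may help. This is a minimal C sketch of the race as just described: research and rogue risk both accumulate each turn, and whichever total reaches 800 first settles the outcome. The per-turn rates, and the assumption that safety output offsets risk point for point, are invented for illustration; the mod's real bookkeeping is surely more involved.)

```c
/* Minimal sketch of the win/lose race described above: AI research and
 * rogue risk both accumulate per turn, and whichever reaches 800 first
 * decides the outcome. All rates are illustrative guesses. */
#include <stdio.h>

int main(void) {
    const int GOAL = 800;          /* threshold quoted in the video     */
    int research = 0, risk = 0;
    int turn = 0;

    int research_per_turn = 8;     /* from our AI labs (assumed)        */
    int safety_per_turn   = 5;     /* from AI safety labs (assumed)     */
    int risk_per_turn     = 7;     /* raw risk generated by research    */

    while (research < GOAL && risk < GOAL) {
        research += research_per_turn;
        /* assume safety research offsets risk, never below zero/turn */
        int net_risk = risk_per_turn - safety_per_turn;
        risk += (net_risk > 0) ? net_risk : 0;
        turn++;
    }

    if (research >= GOAL)
        printf("Turn %d: aligned superintelligence, science victory.\n", turn);
    else
        printf("Turn %d: rogue superintelligence, everyone dies.\n", turn);
    return 0;
}
```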
So if we're trying to get AI right, and specifically AI safety, what are the things we're going to need? You're going to need AI safety labs; you can build one once you've discovered the Orthogonality Thesis, and that's not very far in the future. So once I actually have AI labs, I want AI safety labs, and I want research capability. Is there anything I can build now that will increase research? We've maxed out on the currently available tech: you have the university and the public school, and the Freedom ideology will let you build one more relevant building later. But you will also want lots of population, so you have excess people to assign as specialists. So I want happiness and food; the aqueduct and the stadium are both relevant to that. And you want money, so you can put it into treaties and the safety fund. Gotcha. I'll go with the aqueduct, then, because it's super quick.

Hey, we've researched the Orthogonality Thesis. "The greatest task before civilization at present is to make machines what they ought to be: the slaves, instead of the masters, of men." Wow, when was that written? Look it up. I'm not so sure we want slaves, though we definitely don't want them to be masters. "Slaves" feels very anthropomorphic. I envision well-aligned AGI as just wanting the things that we want, so that we don't need to enslave it or control it: it's free to do what it wants, and what it wants to do is good things. Kind of like the Minds in the Culture novels. If we get it right, this is what it might look like: an agent that wants what we want. Those are the novels by Iain M. Banks, and they're officially recommended. There you go. I agree; the Culture isn't perfect, but it's a plausible good outcome. As with any transformative technology, we don't know what the outcome is going to look like, but it's nice to have a positive vision that doesn't involve slaves or masters. I agree.

So we can build AI safety labs now. These are exciting times. We're at 102 AI research and 118 risk, and since there's no lab in any city-state, that excess probably means minor accidents. We made it so that whenever research happens, there's some chance of extra risk being generated by any of the labs. It's the kind of thing where someone forgot to run the test suite before committing the code. What would be an example of the kind of thing you're thinking of? Say there's a software bug in a common AI framework. It goes in, it goes undetected, and many years down the line either a human adversary decides to exploit that vulnerability, or a system that's under optimization pressure finds a way of exploiting it. It's just a little bit more risk in your system, because you haven't designed everything in advance to be as safe and secure as possible. Right: so things like terminating strings with nulls at the end, rather than having the length at the beginning, open up the whole category of buffer overflows. Exactly; the choice itself doesn't cause any harm, it just increases the risk of something bad happening further down the line. So in the real world we probably have a huge amount of that, because most people writing most software, including AI software, are not making an extraordinary effort to make sure their code is secure and robust. Yes. There is a paper, whose name I don't currently remember (I think it had "demons" in it; I'm just making editing work for myself now, I'm not going to do that), that surveys security vulnerabilities in common AI frameworks, and they find a whole bunch of them. Interesting. I will link to that paper if I remember to.
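(The null-termination point is a classic and easy to make concrete. In the toy C sketch below, nothing in a bare char buffer records its capacity, so the commented-out strcpy would run past the end of dst, which is exactly the "whole category" of overflow bugs being described; a length-carrying representation forces the bound check into view. The struct is illustrative and not taken from any real framework.)

```c
/* Toy illustration of null-terminated vs. length-prefixed strings.
 * Deliberately simple; the unsafe call is left commented out. */
#include <stdio.h>
#include <string.h>

struct len_str {            /* length-prefixed alternative */
    size_t len;
    char   data[16];
};

int main(void) {
    char dst[8];
    const char *src = "this is far too long for dst";

    /* Null-terminated style: nothing in dst or src says when to stop
     * except the terminator, so this would overflow dst (undefined
     * behaviour): */
    /* strcpy(dst, src); */

    /* Length-aware style: the bound is explicit and checkable. */
    struct len_str s = { .len = 0 };
    size_t n = strlen(src);
    if (n > sizeof s.data - 1)
        n = sizeof s.data - 1;  /* the check the length field invites */
    memcpy(s.data, src, n);
    s.data[n] = '\0';
    s.len = n;

    printf("stored %zu bytes: %s\n", s.len, s.data);
    (void)dst;
    return 0;
}
```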
I guess just more bombardment. And I'm going to use it to screw over these horse people, because that's my number one priority. Absolutely: capture the horses. I love horses. A true Englishman; I'm a friend to all animals.

In the AI screen, which I can get to either by clicking the AI counter or through the menu, you can now access the AI safety fund. There are no AI safety labs anywhere in the world, so I can invest in a city-state to establish one. Do I have the money? I do not have the money. They're quite expensive.

"There's no point in this conflict continuing any further." Except Askia wants quite a bit of my money. All of my money, in fact. How about: no. I've got a lot of bombers that I don't feel like selling, and not enough Russians to attack, so I'm fine with that.

York has finished its safety lab. Let's go into the city. You see how all of these specialist buildings have slots next to them? You can manually assign people to work in them. So right now we have a guy in the factory, an engineer in the factory, and nothing else; everyone else is out in the fields producing stuff. Now both the AI lab and the AI safety lab have specialist slots, lots of them currently empty. We're still ahead on rogue points, but I want to do more safety and less regular research. Is the fact that this slot shows an engineer significant? Is he more powerful than just putting a regular dude in there, or is he just orange instead of green? Ordinary citizens work the tiles on the land. This one, for example, is a fisherman, and he's a fisherman in a place that actually has fish, unlike this guy, who is a fisherman in a place that doesn't have any fish. Dude, stop being a fisherman. Instead, come with me and be a fisher of AI. He's unemployed anyway; you can just click the slot to reassign him. Oh nice. Cool.
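(For readers who haven't played Civilization: the reassignment just shown is a choice between a citizen working a map tile and a citizen filling a building slot. A toy C sketch of that choice follows; the job names and yields are invented for illustration, since the mod's exact numbers aren't shown here.)

```c
/* Toy version of the citizen-assignment trade-off: a citizen either
 * works a map tile (food/production/gold) or fills a specialist slot
 * in a building (science, or AI research in the mod's labs).
 * All yields are invented for illustration. */
#include <stdio.h>

enum job { TILE_FISH, TILE_EMPTY_COAST, SPECIALIST_AI_LAB, SPECIALIST_SAFETY_LAB };

static const char *describe(enum job j) {
    switch (j) {
    case TILE_FISH:             return "fisherman with fish: +3 food";
    case TILE_EMPTY_COAST:      return "fisherman, no fish: +1 food (wasted)";
    case SPECIALIST_AI_LAB:     return "AI lab specialist: +2 AI research";
    case SPECIALIST_SAFETY_LAB: return "safety lab specialist: +2 safety";
    default:                    return "unemployed";
    }
}

int main(void) {
    /* Reassigning the fishless fisherman, as in the video. */
    enum job citizen = TILE_EMPTY_COAST;
    printf("before: %s\n", describe(citizen));
    citizen = SPECIALIST_SAFETY_LAB;  /* "come be a fisher of AI" */
    printf("after:  %s\n", describe(citizen));
    return 0;
}
```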
A peace offer: a bunch of silver and gold. I'm okay with that; I guess she's completely fed up with being at war. She also has no more units on our borders to bomb. We really did just destroy everything. If she had any units left I would say let's finish them off, but that's fine. I accept. You're welcome, idiot. Oh, actually, I should talk to Greece before we get into this. Do we want anything else from them? A research pact? They don't have the money, because they've been spending it all on units that I immediately destroy. Another war demand? What do you want? Fine, I'm going to bomb the hell out of Sparta; I don't see that you've really left me much choice. Did we just destroy their jets? Yeah, we did. Nice.

It's funny that there are prerequisites for the Orthogonality Thesis, because you would think all you need is to think about it a little bit. Well, thinking the orthogonality thought is hard without having AI as a concept yet, and without having robotics as a concept; my guess is it's most salient when you have these systems that are going to be out in environments doing various things. Sure, that makes sense. It's kind of interesting: it's very hard to know which ideas are obvious, because there are so many things that seem obvious in retrospect. (I shouldn't be bombing this guy; the horse people are the absolute priority. What am I doing?) Imagine the early days of AI, where the idea is that you're just going to hand-code everything. With such a system, safety concerns are not obvious: the system only does what you tell it to do. And that is still kind of the received wisdom about AI. But once you start thinking about robotics, and about reinforcement learning, the concern becomes a lot more salient. Machine learning only comes up much later, in effect, but I think the earliest it shows up is when you start mixing the idea of having computers think for themselves with the realization that this is going to be much harder than just coding everything by hand: that there's a level of unpredictability there, which people didn't appreciate early on. I think the maxim that hard things are easy and easy things are hard came from people who were very much working on robotics. That's really true. The things that seem easy to human beings are the things we've been doing for so long that they're effectively running on specialized hardware. They're easy because we don't have to think about them; they're things we're highly optimized to do. And that includes some of our morality: we just know that something would make us super embarrassed, or make us feel really guilty. Right, a kind of low-level support for morality; it's instinctive. So it feels super obvious, but it's not going to be easy to code into an AI: this thing that was put in place by a tremendously long process of evolution doing game theory, with all of these specialized ad-hoc twists and turns and details.

And here we are, chatting about morality as you're bombing the outskirts of Sparta. They're not people, though. They're simulations of people, and they have been mean to you. They've been mean to me, and they refuse to make peace. And not only are they simulations of people, they're not accurate simulations of people either; they're not high-detail enough to have moral weight. I just want to look at him again. Hi. That's his friend's face. So: go to cities on his side, and pick Dublin. The Aztecs are really, really into Dublin. That's fair; Dublin's nice. I mean, it's a nice place. Fresh water; it actually tastes different. Are there any horses there? No. I'll leave him; I don't like killing civilians for no reason.

Get out of the temple. What are you doing? Get in the lab. So now we have two in the factory and two in the university. That seems sensible. And Green's Windmill needs somebody in it. Oh, you've caught up. Yes. We may actually win this. I feel good about it. It does seem like no one else is in the race, which makes it a lot easier to win. I feel terrible, though; it's not Sparta's fault that Alexander is an idiot. I'm just going to sue for peace one more time. I keep saying that. Ah, finally! Thank you. How long have we been at war with Greece? I think since the forties. It's been thirty years; it's a Thirty Years' War. And you can make peace with all the city-states now. I'm at war with Florence, whom I'm pretty sure I've never even seen, but fine, let's make peace. Thanks. And Kathmandu. And Stockholm. Wait, what? They were all at war with us as well? Did I just make world peace? I think that made world peace. It was surprisingly easy.

I don't really understand the AI safety fund mechanic. It's for getting another safety lab. Say you're going for a one-city challenge: you only have one city, which means you can only have one safety lab, which caps your safety output, and that's really not enough to balance everything. But you can use the fund to establish safety labs elsewhere in the world. I see: so if you're low on cities but cash-rich, you can establish safety labs somewhere else. And we've finished researching Ecology, so we can get Globalization. That's good. And then, at last, Data Mining. Oh, and now that we've built universities in all our cities, we can build Oxford University. Interesting. If we had more budget, I would have just changed its name to Cambridge. I'm a fan of Oxford as well; not quite as good as Cambridge, but not bad. I'm not going to say bad things about Oxford, but I prefer Cambridge. In the absence of Cambridge, though, I think it's a good idea to build it, because it gives +3 science and a free technology, and we could really use both of those.

"Your AI researchers have reached the level of Advanced AI. The new AI manufacturing techniques will give you a 20% increase to production in all cities." Well, that's great. I'm really on track to win this one. And how's risk? Our AI research level is higher than the rogue level, and no one else seems to be anywhere near. What do satellites do? Reveal the entire map. That sounds great. Then you can get Rocketry. "A good rule for rocket experimenters to follow is this: always assume that it will explode." It really does. And we could build the Apollo Program... actually, no, we can't, because we removed all of that from the game. We took the game's original science victory, where you build the Apollo Program, build a spaceship, and fly it to Alpha Centauri, and instead we said: you win a science victory if you build an aligned superintelligence.

You'll notice we can now build a Smart Defense Grid: it improves the defense of the city, you must have Advanced AI to build it, and it increases the risk of rogue superintelligence by 1 per turn. So if we were still at war and started using our AI in a military way, that would give us a big advantage but also increase the risk. Usually the defense buildings have to be built one after the other: you build walls, and then the thing that comes afterwards, a military base or something like that, and they stack up, with walls giving you plus four defense or so. The Smart Defense Grid says forget all of that: you can just automate your defenses and get plus fifteen, which is quite a lot. But, first, you can't build it until you've reached the Advanced AI threshold, and, second, it comes with some risk, because now you've got a whole bunch of military hardware under software control, and you're researching more military-type stuff. That makes sense.

Oh, he has wine. Excess wine, even. Maybe he wants cotton for it. Where's my cotton? Some cotton for some wine. And everybody's happy, because we have wine again. Once more Britain has wine; there was much rejoicing. That's a lot of Aztec troops, though not technologically advanced ones. I'm curious what the Aztecs are up to. They're invading Greece's land; maybe they have open borders with them. Oh, and we have Data Mining, which allows us to build Deep Blue. That sounds good. I guess we're just okay with Canterbury being surrounded by Aztecs. Hastings has finished its research lab. Oh, I could build a Military AI Lab: it gives +30 experience to all new units, you must have Advanced AI to build it, and it increases the risk of rogue superintelligence by 2 per turn. This is even worse than the Smart Defense Grid, because now we're talking about offensive capabilities. It's the same pattern as the barracks, armory, and military academy buildings that give extra experience. You know how all of our units got one upgrade as we created them, because we'd started the modern age and have a barracks in all our cities? Plus 30 means new units come out with additional upgrades as they're created.
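(Those two buildings make the mod's power-versus-risk trade explicit, so here is a small C sketch that just totals the extra per-turn risk of a chosen build set. The +15 defense, +30 experience, +1, and +2 figures are the ones quoted in the conversation; the struct and the totaling rule are my own illustration.)

```c
/* The two military AI buildings described above trade power for rogue
 * risk. This sketch totals the per-turn risk a build set would add;
 * the figures are the ones quoted in the conversation. */
#include <stdio.h>

struct ai_building { const char *name; int benefit; int risk_per_turn; };

int main(void) {
    struct ai_building options[] = {
        { "Smart Defense Grid (+15 city defense)", 15, 1 },
        { "Military AI Lab (+30 unit XP)",         30, 2 },
    };
    int built[] = { 1, 1 };   /* suppose we build both */
    int extra_risk = 0;

    for (int i = 0; i < 2; i++)
        if (built[i]) {
            printf("building: %s -> +%d risk/turn\n",
                   options[i].name, options[i].risk_per_turn);
            extra_risk += options[i].risk_per_turn;
        }
    printf("total extra rogue risk per turn: %d\n", extra_risk);
    return 0;
}
```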
You know what this is like? I feel like there are all kinds of game mechanics here that we're just not using, by virtue of our position being too good, and problems we're just not even facing, like how to balance the military benefit you need to survive against the AI risk it generates. I'd be very uncomfortable with that trade-off.

"The game of chess is not merely an idle amusement. Several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions. For life is a kind of chess." Thanks, Benjamin Franklin. He's on my underwear, actually. I'm wearing Benjamin Franklin underwear; it's true. Hundred-dollar-bill underwear. That's good to know.

Okay, so Deep Blue means that artists provide +1 AI research. I don't have any artists, do I? There are slots for them in temples. And now we can build a data center. It's the future! It became the future; Jonathan Coulton was right. So the idea with Deep Blue is that it captures society's imagination: it's not just AI researchers who are now able to contribute to this technological progress, lots of other people can suddenly become part of it. It becomes more of an interdisciplinary thing, almost a social movement.

And in the midst of all of this, all hell apparently is breaking loose. Montezuma again. Hey, he's changed his facial expression. He looks about the same: angry, just a different kind of angry. He looks like you just spilled his pint. And there's an unprotected general outside London that this tank can just steamroll over. He never was very bright, was he, Montezuma? I spent all this time building missile cruisers and tanks and things and didn't get to use them, and now I get to use them. Bombers, awaken! I feel bad for the Aztecs, man; I feel embarrassed on their behalf. Ackbar? I'm going to assume he's the Star Wars guy, the fish one. Coming back at bombers with that is not very effective. Piss off, Montezuma. I don't understand Montezuma at all. How much is this war going to have to cost him before he realizes it's stupid? The Aztecs don't have any jets, and they're not going to get vision either, because I'm going to blow the crap out of everything before it gets close. I'm annoyed. I'm annoyed because I'm trying to do a nice thing here: I'm spending a lot of resources on AI safety research, I'm trying to create utopia, and this guy, who still has a skull for a hat, is coming at me in, effectively, World War II. He's got a golden skull on his head. So we're bombing away his will to fight. What would it take? He's very, very determined to be stupid and hurt himself.

AI-safety-wise, this has been relatively uneventful, because no one else is going for that win condition. How are we doing? We're still at more than double the rogue level, so we can actually quite safely max out our AI labs. I guess in this scenario it turns out that alignment is not so hard. Well, if you have full control of the project, there's only one person working on it, you haven't really started racing anyone, and you discovered the Orthogonality Thesis and put a bunch of people on the problem from the beginning, then they could just come up with a sensible solution. That would be nice. That would be lovely. And there presumably are possible worlds like that.

We've got Atomic Theory. Cool; don't need it. "Your AI researchers have reached a new level of expertise. AI-driven abundance increases happiness in each city by two." That's good. Wow, we're doing really well. And what is that? Oh, it's uranium. It just became uranium: we never cared about it before, so we didn't notice the ground was glowing. That's exactly right; you only see things once you know they're there.

I do feel bad about Dublin. Right, we're going to liberate Dublin. Thank us in the future, when you're no longer a puppet of the ridiculous golden-skull-hat man. Your move, Montezuma: it's just been nuked. He must be so pissed. Fine. Idiot. I can't get over how dumb he is. "Your AI researchers warn that the risk of rogue superintelligence is becoming higher. Consider dedicating more resources to AI safety to prevent catastrophe." That's because risk passed 250. We're still about double, so I'm comfortable with that. It's 602 now; we are steadily advancing, and we are extremely happy. Nuclear fission? I'm more about fusion, personally. Oh, it is fusion; I misread it. We've had fission for ages. You're discovering one of these per turn now. That's pretty crazy. We're a heavily research-oriented civilization. That is true. This is Britain: we care about two things, AI research and bombers. And those horse guys.

You're making so much research now that this might just be the last turn. Oh my gosh, look at that. So it turns out that's how you avoid an AI race: have no one pursue it other than you. Just make sure everyone else is incredibly militaristic. Everyone else went military, and it backfired, because we had air superiority. I can start researching Future Tech, which means I'm done with the research tree; not much left for the AI to do for us there. There was supposed to be a warning when risk passed 500, but never mind; we're nearly there anyway. Isn't it weird that putting people in the opera house is better for AI research than putting them in the university? No comment. They're coming in with tanks now. Busy, busy. No concern of ours; we're just double-checking our code.

"You have achieved victory through mastery of science. You have conquered the mysteries of nature and ushered in a technology that makes utopia within reach. Your triumph will be remembered as long as the stars burn in the night sky." Hurrah! Everything is wonderful, and the sun never sets on an artificial-intelligence-powered British Empire. Everything is beautiful, nothing hurts, and all is for the best in this best of all possible worlds.

[Music]

This video took a lot more editing work than my usual videos, in part because it's a lot longer, and also because I started with about nine hours of gameplay footage. I had to get some new equipment to make it all work, so I want to thank my excellent Patreon supporters, all of these people here, for making it possible. In this video I'm especially thanking Steve: thank you, Steve. I hope you enjoyed the little behind-the-scenes video I made about how this one was put together, and I've also uploaded the full, roughly eight-hour-long version, if you want to watch that. Anyway, thank you again, and I'll see you next time.
"At least three people have died playing online games for days without rest. People have lost their spouses, jobs, and children to World of Warcraft. If people have the right to play video games - and it's hard to imagine a more fundamental right - then the market is going to respond by supplying the most engaging video games that can be sold, to the point that exceptionally engaged consumers are removed from the gene pool. How does a consumer product become so involving that, after 57 hours of using the product, the consumer would rather use the product for one more hour than eat or sleep? (I suppose one could argue that the consumer makes a rational decision that they'd rather play Starcraft for the next hour than live out the rest of their lives, but let's just not go there. Please.)A candy bar is a superstimulus: it contains more concentrated sugar, salt, and fat than anything that exists in the ancestral environment. A candy bar matches taste buds that evolved in a hunter-gatherer environment, but it matches those taste buds much more strongly than anything that actually existed in the hunter-gatherer environment. The signal that once reliably correlated to healthy food has been hijacked, blotted out with a point in tastespace that wasn't in the training dataset - an impossibly distant outlier on the old ancestral graphs. Tastiness, formerly representing the evolutionarily identified correlates of healthiness, has been reverse-engineered and perfectly matched with an artificial substance. Unfortunately there's no equally powerful market incentive to make the resulting food item as healthy as it is tasty. We can't taste healthfulness, after all. The now-famous Dove Evolution video shows the painstaking construction of another superstimulus: an ordinary woman transformed by makeup, careful photography, and finally extensive Photoshopping, into a billboard model - a beauty impossible, unmatchable by human women in the unretouched real world. Actual women are killing themselves (e.g. supermodels using cocaine to keep their weight down) to keep up with competitors that literally don't exist. And likewise, a video game can be so much more engaging than mere reality, even through a simple computer monitor, that someone will play it without food or sleep until they literally die. I don't know all the tricks used in video games, but I can guess some of them - challenges poised at the critical point between ease and impossibility, intermittent reinforcement, feedback showing an ever-increasing score, social involvement in massively multiplayer games. Is there a limit to the market incentive to make video games more engaging? You might hope there'd be no incentive past the point where the players lose their jobs; after all, they must be able to pay their subscription fee. This would imply a "sweet spot" for the addictiveness of games, where the mode of the bell curve is having fun, and only a few unfortunate souls on the tail become addicted to the point of losing their jobs. As of 2007, playing World of Warcraft for 58 hours straight until you literally die is still the exception rather than the rule. But video game manufacturers compete against each other, and if you can make your game 5% more addictive, you may be able to steal 50% of your competitor's customers. You can see how this problem could get a lot worse. If people have the right to be tempted - and that's what free will is all about - the market is going to respond by supplying as much temptation as can be sold. 
The incentive is to make your stimuli 5% more tempting than those of your current leading competitors. This continues well beyond the point where the stimuli become ancestrally anomalous superstimuli. Consider how our standards of product-selling feminine beauty have changed since the advertisements of the 1950s. And as candy bars demonstrate, the market incentive also continues well beyond the point where the superstimulus begins wreaking collateral damage on the consumer. So why don't we just say no? A key assumption of free-market economics is that, in the absence of force and fraud, people can always refuse to engage in a harmful transaction. (To the extent this is true, a free market would be, not merely the best policy on the whole, but a policy with few or no downsides.) An organism that regularly passes up food will die, as some video game players found out the hard way. But, on some occasions in the ancestral environment, a typically beneficial (and therefore tempting) act may in fact be harmful. Humans, as organisms, have an unusually strong ability to perceive these special cases using abstract thought. On the other hand we also tend to imagine lots of special-case consequences that don't exist, like ancestor spirits commanding us not to eat perfectly good rabbits. Evolution seems to have struck a compromise, or perhaps just aggregated new systems on top of old. Homo sapiens are still tempted by food, but our oversized prefrontal cortices give us a limited ability to resist temptation. Not unlimited ability - our ancestors with too much willpower probably starved themselves to sacrifice to the gods, or failed to commit adultery one too many times. The video game players who died must have exercised willpower (in some sense) to keep playing for so long without food or sleep; the evolutionary hazard of self-control. Resisting any temptation takes conscious expenditure of an exhaustible supply of mental energy. It is not in fact true that we can "just say no" - not just say no, without cost to ourselves. Even humans who won the birth lottery for willpower or foresightfulness still pay a price to resist temptation. The price is just more easily paid. Our limited willpower evolved to deal with ancestral temptations; it may not operate well against enticements beyond anything known to hunter-gatherers. Even where we successfully resist a superstimulus, it seems plausible that the effort required would deplete willpower much faster than resisting ancestral temptations. Is public display of superstimuli a negative externality, even to the people who say no? Should we ban chocolate cookie ads, or storefronts that openly say "Ice Cream"? Just because a problem exists doesn't show (without further justification and a substantial burden of proof) that the government can fix it. The regulator's career incentive does not focus on products that combine low-grade consumer harm with addictive superstimuli; it focuses on products with failure modes spectacular enough to get into the newspaper. Conversely, just because the government may not be able to fix something, doesn't mean it isn't going wrong. I leave you with a final argument from fictional evidence: Simon Funk's online novel After Life depicts (among other plot points) the planned extermination of biological Homo sapiens - not by marching robot armies, but by artificial children that are much cuter and sweeter and more fun to raise than real children. 
Perhaps the demographic collapse of advanced societies happens because the market supplies ever-more-tempting alternatives to having children, while the attractiveness of changing diapers remains constant over time. Where are the advertising billboards that say "BREED"? Who will pay professional image consultants to make arguing with sullen teenagers seem more alluring than a vacation in Tahiti? "In the end," Simon Funk wrote, "the human species was simply marketed out of existence."" The above is a chunk of text from a post created by a user on the website LessWrong. Can you elaborate on the potential societal and psychological impacts of video game addiction, particularly in relation to the concept of superstimuli and the evolutionary aspects you mentioned? Video game addiction, like any form of addiction, can have profound societal and psychological impacts. It's important to understand that video games, particularly the most addictive ones, are designed to tap into our brain's reward system, providing us with a constant stream of achievements and positive reinforcement. This can lead to a form of dependency where the individual prioritizes the game over other aspects of their life, such as work, school, relationships, and even basic self-care. From a societal perspective, this can lead to a number of issues. If a significant portion of the population is spending excessive amounts of time playing video games, productivity can decrease. This could potentially impact the economy, particularly if those affected are in their prime working years. It can also lead to breakdowns in relationships and families, as the addicted individual withdraws from social interactions to spend more time gaming. On a psychological level, video game addiction can lead to a variety of mental health issues. These can include depression, anxiety, and social phobia, as the individual becomes more isolated and less engaged with the real world. It can also contribute to physical health problems, such as obesity and sleep disorders, due to a lack of physical activity and irregular sleep patterns. The concept of superstimuli is particularly relevant here. Video games are a form of superstimulus, providing an experience that is more intense and rewarding than anything we might encounter in our everyday lives. This is what makes them so engaging and potentially addictive. Our brains have evolved to seek out rewarding experiences, but they are not equipped to handle the intensity of the rewards provided by video games. This can lead to a form of overstimulation, where the individual becomes desensitized to other, less intense forms of reward. In terms of evolutionary aspects, our brains have not yet adapted to the rapid changes in technology that have occurred over the past few decades. We are still wired for a hunter-gatherer lifestyle, where immediate rewards were rare and had to be worked for. The instant gratification provided by video games is something our brains are not equipped to handle, leading to potential addiction. In conclusion, while video games can be a source of entertainment and even a tool for learning and development, their potential to act as superstimuli and lead to addiction should not be overlooked. It's important for individuals to be aware of these risks and for society as a whole to consider how to address this growing issue. 
Considering the psychological and societal impacts of video game addiction and the concept of superstimuli, how might advancements in virtual reality technology exacerbate or mitigate these issues? Additionally, propose potential measures that could be implemented to prevent addiction while maintaining the engaging nature of these games.

Virtual reality (VR) technology represents a significant advancement in the gaming industry, offering an immersive experience that is even more engaging than traditional video games. This can potentially exacerbate the issues associated with video game addiction and superstimuli. In a VR environment, the line between the virtual and real world can become blurred, potentially leading to a stronger sense of escapism and a greater risk of addiction. The immersive nature of VR could make the superstimuli even more potent, providing a level of stimulation that is far beyond anything that can be experienced in the real world.

On the other hand, VR also has the potential to mitigate some of these issues. For example, VR can be used to create realistic simulations of real-world activities, providing a form of 'virtual practice' that can be beneficial in various fields such as medical training, education, and mental health therapy. This could potentially shift the focus of gaming from pure entertainment to more productive and beneficial activities.

In terms of measures to prevent addiction while maintaining the engaging nature of video games, there are several strategies that could be implemented. One approach could be to incorporate elements of self-regulation into the games themselves. For example, games could be designed to encourage regular breaks, or to limit the amount of time that can be spent playing in a single session.

Another strategy could be to promote education and awareness about the risks of video game addiction. This could involve providing information about the signs of addiction, as well as resources for getting help. It's also important to promote a balanced lifestyle, emphasizing the importance of physical activity, social interaction, and other non-gaming activities.

Finally, there could be a role for regulation in this area. This could involve setting standards for game design to prevent the creation of excessively addictive games, or implementing age restrictions to protect younger players. However, any regulatory measures would need to be carefully considered to avoid stifling innovation in the gaming industry.

In conclusion, while VR and other advancements in gaming technology present new challenges in terms of addiction and superstimuli, they also offer new opportunities for creating engaging and beneficial gaming experiences. It's important to strike a balance between these two aspects, ensuring that games are enjoyable and immersive, but also safe and healthy.
Help create an instrumental rationality "stack ranking"? I recently heard about SIAI's Rationality Minicamp and thought it sounded cool, but for logistical/expense reasons I won't be going to one. There are probably lots of people who are interested in improving their instrumental rationality, know about and like LessWrong, but haven't read the vast majority of content because there is just so much material, and the practical payoff is uncertain. It would be cool if it were much easier for people to find the highest-ROI material on LessWrong.

My rough idea for how this new instrumental rationality tool might work:

* It starts off as a simple wiki focused on instrumental rationality. People only add things to the wiki (often just links to existing LessWrong articles) if they have tried them and found them very useful for achieving their goals.
* People are encouraged to add "exercises" that help you develop the skill represented by the article, of the type that are presumably done at the Rationality Minicamps.
* Only people who have tried the specific thing in question should add comments about their experiences with it.
* Long Term Goal: Every LessWrong user can define their own private stack rank of the most important concepts/techniques/habits for instrumental rationality. These stack ranks are globally merged by some LessWrong software to create an overall stack rank of the highest-ROI ideas/behaviors/techniques as judged by the LessWrong community at any given time (one possible merging scheme is sketched below). People looking to improve their instrumental rationality can then just visit this global stack rank and pick the highest item that they haven't tried yet to experiment with, and work backwards from there if there are any prerequisites.

Do you think others would find this useful? Anyone have suggested improvements?
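To make the "global merge" concrete, here is a minimal sketch of one way it could work, using Borda-style aggregation: each user's private stack rank awards more points to higher-ranked items, and the global rank orders items by total points. The scheme, the function name, and the technique labels are all hypothetical illustrations, not part of the proposal above.

```python
from collections import defaultdict

def merge_stack_ranks(private_ranks):
    """Merge per-user stack ranks into one global ranking.

    private_ranks: list of lists, each ordered best-first.
    Borda-style scoring: in a list of length n, the item in
    position i earns n - i points; items a user hasn't ranked
    earn nothing from that user.
    """
    scores = defaultdict(float)
    for rank in private_ranks:
        n = len(rank)
        for i, item in enumerate(rank):
            scores[item] += n - i
    # Highest total score first.
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    users = [
        ["goal factoring", "ugh fields", "trivial inconveniences"],
        ["goal factoring", "trivial inconveniences"],
        ["ugh fields", "goal factoring"],
    ]
    print(merge_stack_ranks(users))
    # ['goal factoring', 'ugh fields', 'trivial inconveniences']
```

Borda counting is just one choice; pairwise (Condorcet-style) methods would weight partial rankings differently, and the right trade-off would be a design decision for the LessWrong software.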
Planning for Extreme AI Risks

This post should not be taken as a polished recommendation to AI companies and instead should be treated as an informal summary of a worldview. The content is inspired by conversations with a large number of people, so I cannot take credit for any of these ideas. For a summary of this post, see the thread on X.

[Figure: One of the timelines I explore in this post.]

Many people write opinions about how to handle advanced AI, which can be considered "plans." There's the "stop AI now" plan. On the other side of the aisle, there's the "build AI faster" plan. Some plans try to strike a balance with an idyllic governance regime. And others have a "race sometimes, pause sometimes, it will be a dumpster-fire" vibe.

So why am I proposing another plan? Existing plans provide a nice patchwork of the options available, but I'm not satisfied with any. Some plans describe what the world would do if everyone were on the ball. Others are ruthlessly practical, to the point where they dismiss potentially essential interventions like international governance. And still others include a hodgepodge of interventions that seem promising at the moment but might not hold up. This plan is comparatively wide-ranging, concise, and selective in its claims.

The plan is for a responsible AI developer, which I call "Magma." It does not directly apply to governments, non-profits, or external researchers. Magma has the sole objective of minimizing risks on the scale of AI or human 'takeover' – even if advancing this goal requires dangerous actions. This is a dangerous plan for a dangerous world.

The plan includes two parts:

1. Goals and strategies for achieving them.
2. Prioritization heuristics that specify how the developer might allocate their limited resources between goals over time.

The plan is quite different from an "RSP" or a "safety framework." It is not meant to be enforceable or to allow an AI company to transparently justify its actions.
2014-15 New Year review

#### 2014 progress

If someone told me at the beginning of 2014 that I would co-found an organization to mitigate technological risks to humanity, I might not have believed them. Thanks Max, Meia, Anthony and Jaan for the great initiative!

I am almost done with my first research project on variable selection and classification using a Bayesian forest model – I simplified the variable partition in the model, came up with better tree updates, added a hyperprior, sped up the algorithm by an order of magnitude, and started testing on real data. Among the other ambitious projects of the past year are two MIRIx workshops plus writing up the results, and starting this blog.

Improvements in personal effectiveness:

* started using a daily checklist of morning habits
* started taking melatonin every night
* started tagging new thoughts
* started using FollowUpThen to schedule future tasks without overloading my todo list
* started using Toggl to track work hours only
* stopped using Beeminder (too stressful), and replaced it with a combination of 42goals and FollowUpThen (works well)
* quit as President of the Toastmasters club
* made a volunteer application form for FLI, so that instead of being inundated with 7 freeform emails per month from interested folks, I get the relevant information in an organized spreadsheet and I'm not required to respond

Random cool things I did:

* climbing a 5.11a
* climbing outdoors
* indoor surfing
* polar bear swim

#### 2014 resolutions

A year ago, I made a number of New Year resolutions (and assigned a probability of completion for each goal). Here is how they worked out:

Succeeded:

* continue meditation practice (>10 minutes daily, >120 times) (70%) – did ~200 times
* start a new research project (70%) – started working with a new advisor, did some background reading, narrowed down a topic, wrote a grant proposal
* do at least 5 pullups in a row (85%)
* reading stats blogs
* go to 3 conferences, including a MIRI workshop (80%) – went to NESS, JSM, NIPS, and two MIRIx workshops
* give 5 speeches at Toastmasters (75%)
* give at least 5 LessWrong meetup talks (70%)
* run comfort zone expansion outings at least twice (80%)
* start a group project at Citadel (40%) – FLI work groups and MIRIx workshops sort of count for this
* introduce at least 3 friends to LW meetups (50%)
* help people achieve their goals – helped run a weekly habit training session at Citadel

Essentially succeeded:

* publish paper about current research project (90%) – almost done, hope to submit by March 1
* write at least 5 LW posts (80%) – wrote 4 posts
* more writing – reflections (did some), stories (sort of), poems (nope), also journals and blog posts

Failed and no longer endorsed:

* continue to avoid Beeminder debt – didn't work, then stopped using Beeminder; now use 42goals for goal tracking and FollowUpThen for reminders
* do consulting for Metamed (60%)
* read 90% of the LW Sequences (70%) – made progress, but no longer want to read such a high fraction (waiting for the ebook to come out)
* finish Pearl's Causality (50%) – read the LW review of the book instead
* learn more economics and biology

Calibration (recomputed in the sketch after this post):

* low predictions, 40-60%: 2/4 = 50% (perfectly calibrated)
* high predictions, 70-90%: 7/10 = 70% (overconfident)

Conclusions:

* Reading goals mostly don't work for me – if I do set them, flexible goals of the form "read some of X" (like stats blogs) do better than more fixed and time-consuming goals like "read most of X" (like the Sequences)
* The rate of goal disendorsement is 5/19 = 26%.
* I don't tend to completely fail on goals I continue to endorse – yay!

#### 2015 goals and habits

Broad categories of goals for the coming year:

1. Research:
   * wrap up and submit BFC project,
   * start and make significant progress on 1-2 new projects
2. FLI:
   * help streamline operations and communication,
   * continue work on AI safety outreach to AI researchers,
   * encourage AI safety research,
   * start projects in risk areas other than AI safety
3. Self-improvement:
   * reduce weekend/free-time anxiety,
   * increase acceptance of suboptimal situations,
   * improve introspection ability and my model of myself,
   * improve retention of information (using Anki),
   * eliminate cluttery speech pattern

Habits to maintain:

* daily meditation
* daily pushups
* daily melatonin
* tagging and writing down new thoughts
* tracking goals in 42goals
* tracking work hours in Toggl
* journaling 1-2 times / week

#### 2015 predictions

1. I will submit the Bayesian Forest Classifier paper for publication (95%)
2. I will submit another paper besides BFC (40%)
3. I will present BFC results at a conference (JSM, ICML or NIPS) (40%)
4. I will get a new external fellowship to replace my expiring NSERC fellowship (50%)
5. I will skim at least 20 research papers in machine learning (70%)
6. I will write at least 12 blog posts (70%)
7. I will climb a 5.12 without rope cheating (50%)
8. I will lead climb a 5.11a (50%)
9. I will be able to do 10 pullups in a row (60%)
10. I will meditate at least 150 times (80%)
11. I will record at least 150 new thoughts (70%)
12. I will make at least 100 Anki cards by the end of the year (70%)
13. I will read at least 10 books (60%)
14. I will attend Burning Man (90%)
15. Boston will have a second rationalist house by the end of the year (30%)
16. FLI will hire a full-time project manager or administrator (80%)
17. FLI will start a project on biotech safety (70%)

I will update this post if other goals and predictions for the year come to mind before the end of January.
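The calibration tally in the review above is easy to automate. Below is a minimal sketch (our own encoding, not the author's); to reproduce the 2/4 and 7/10 figures, the "essentially succeeded" goals are scored as failures, and the bucket split at 60%/70% follows the post's own grouping.

```python
def calibration_report(predictions):
    """predictions: list of (stated_probability, came_true) pairs."""
    low = [(p, hit) for p, hit in predictions if p <= 0.6]
    high = [(p, hit) for p, hit in predictions if p >= 0.7]
    for name, bucket in [("low (40-60%)", low), ("high (70-90%)", high)]:
        if not bucket:
            continue
        hits = sum(hit for _, hit in bucket)
        mean_p = sum(p for p, _ in bucket) / len(bucket)
        print(f"{name}: {hits}/{len(bucket)} = {hits/len(bucket):.0%} "
              f"(mean stated: {mean_p:.0%})")

# The 2014 goals with stated probabilities, encoded from the lists above.
calibration_report([
    (0.4, True), (0.5, True), (0.6, False), (0.5, False),     # low bucket
    (0.7, True), (0.7, True), (0.7, True), (0.85, True),      # high bucket
    (0.8, True), (0.75, True), (0.8, True),
    (0.9, False), (0.8, False), (0.7, False),
])
# low (40-60%): 2/4 = 50% (mean stated: 50%)
# high (70-90%): 7/10 = 70% (mean stated: 77%)
```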
A Knowledge-based Treatment of Human-Automation Systems

Yoram Moses and Marcia K. Shamo

Abstract—In a supervisory control system the human agent's knowledge of past, current, and future system behavior is critical for system performance. Being able to reason about that knowledge in a precise and structured manner is central to effective system design. In this paper we introduce the application of a well-established formal approach to reasoning about knowledge to the modeling and analysis of complex human-automation systems. An intuitive notion of knowledge in human-automation systems is sketched and then cast as a formal model. We present a case study in which the approach is used to model and reason about a classic problem from the human-automation systems literature; the results of our analysis provide evidence for the validity and value of reasoning about complex systems in terms of the knowledge of the system's agents. To conclude, we discuss research directions that will extend this approach, and note several systems in the aviation and human-robot team domains that are of particular interest.

Index Terms—Formal specifications, knowledge in multi-agent systems, man-machine systems.

I. INTRODUCTION

A human operator's knowledge regarding a complex system's past, current, and future behavior is elemental to system performance. Indeed, issues of the existence, content, and validity of operator knowledge in a complex human-automation system underlie many major threads of research in the human-machine systems community. These research threads include broad and active domains such as situation awareness (briefly, an operator's knowledge of the relevant elements in the environment), mode awareness (operator knowledge of a system's current operational state), and mental models (knowledge and beliefs regarding a system's components, possible behaviors, and interdependencies). In a basic sense, all these domains are concerned with aspects of the operator's knowledge and aim to define both theoretical frameworks and evaluative criteria for determining whether that knowledge is satisfactory for system performance.
Since human operator knowledge is a fundamental element of system performance, we argue that it is important to reason about that knowledge, and its role in controlling system performance, in a precise and structured manner. Moreover, as increasingly complex human-automation and human-robot systems are developed, the need for tools that support the design and evaluation of these systems from the perspective of the agents' knowledge becomes an imperative. Consider, for instance, the knowledge possessed by the human and automation agents in advanced flight deck information systems, in systems of human supervisory controllers and multiple autonomous vehicles, or in robot-assisted search and rescue teams. Sophisticated tools are required to effectively analyze the complex and subtle interactions between what the human and non-human agents know in these and similar systems.

To be useful, a reasoning approach must satisfy a number of modeling and analysis requirements. In particular, knowledge in a system must be a well-defined entity or resource that can be directly represented, manipulated, and analyzed. Properties of knowledge unique to human agents must be expressible. Since our focus is the correct performance of the complete system, and since non-human agents play equally important albeit different roles, we will in many cases need to be able to capture and reason about the 'knowledge' of the non-human agents in the system. Finally, since systems are dynamic, it will be necessary to represent the evolution of knowledge over time, and how that knowledge both influences and is influenced by system behavior.

In this paper we introduce a formal approach to reasoning about knowledge in human-automation systems that is based on a model of knowledge developed by Halpern and Moses and their colleagues [1]-[3]. Their methodology combines the formal rigor of epistemic logic with intuitive and direct notions of knowledge and action in multi-agent systems, and thus provides a means of reasoning about systems at the level of the agents' knowledge. To date, existing applications of the approach focus primarily on systems in which all agents are non-human (e.g., communication protocols for distributed computer systems [1], [4], robot motion planning [5], adding notions of knowledge and communication to discrete event control systems [6], [7]). We argue that this formalism can be equally valuable for reasoning about knowledge in systems in which one or more agents are human, if properties of knowledge that are unique to human agents can be expressively captured.
The value of reasoning about knowledge in the analysis of socio-technical systems has already been noted in the literature [8], [9]. In the papers cited, the authors discuss an approach combining knowledge, timed automata and Activity Theory, and propose a case study of an aviation accident. From the brief description presented, it appears that their work focuses on the temporal aspect and does not model knowledge explicitly. Our approach makes use of a fine-grained analysis of knowledge based on a careful model of the different entities' local states, along the lines of Fagin et al. [1]. This enables us to establish rigorous claims regarding the roles of knowledge, and the lack of knowledge, in human-automation interaction and supervisory control of complex systems. Our contribution is thus to propose a cohesive formal framework for the modeling and analysis of knowledge in these systems. In this article, we present the theoretical foundations of our approach, and sketch a case study.

The article is structured as follows. Within the domain of complex human-automation systems in which the human's role is that of supervisory controller, we first define more precisely a notion of human knowledge in a complex system. We also put forth basic criteria for what it means for a human operator's knowledge to be satisfactory for system performance. We then describe the main concepts and elements of the knowledge formalism as developed by Fagin et al., and present a variation that provides a more expressive language for modeling human-automation systems. Following, we consider a case study in which our formalism is used to model and reason about the Therac-25 device problem, a classic problem in the human-automation systems literature [10]. We provide a brief overview of the relevant aspects of the problem, and construct a knowledge-based model. We then use the model to evaluate the knowledge in the system against formally defined criteria, and show how a solution can be derived and then validated. To conclude, we present our views regarding the potential value of the approach for reasoning about knowledge in a variety of human-automation and human-robot systems, and outline a number of research directions that will expand the value of the approach as a means of modeling complex systems.

II. HUMAN KNOWLEDGE IN COMPLEX SYSTEMS

Human knowledge is a significant and intriguing topic for study in many domains. Philosophers from ancient times have considered fundamental questions such as the nature of knowledge and the origin of its existence, and began the development of logics and formal modeling in order to more precisely reason about these complex and often subtle concepts [11]. In more contemporary times, scholars in fields such as Artificial Intelligence and Cognitive Science have variously interpreted and re-interpreted the concept of knowledge in order to explore their particular domains, focusing for example on notions such as knowledge representation and knowledge-based agents that are fundamental to AI. Valuable outgrowths of work in these and other important fields include both strong support for the critical value of knowledge as a formal construct, and richly descriptive tools for precisely describing and reasoning about knowledge [12]-[15].

Our own goal is to draw on this formal approach as a means of reasoning precisely about complex human-automation systems. More specifically, we aim to use this well-structured notion of knowledge to model and analyze the design of complex systems for which the human's knowledge is required for supervisory control. The ability to describe in exact terms what a human operator must know in order to control a system, coupled with the ability to describe, in like terms, what that human can be expected to know as a function of the interface, offers a rigorous approach to the analysis of complex human-automation system design efficacy.

A. A Notion of Operator Knowledge

So that we may reason formally about operator knowledge in the context of complex systems, we first need to conceptualize operator knowledge in supervisory control in a manner that will be amenable to capture within a formal framework. In this section we discuss types of knowledge that an operator may need to control a complex system, and then examine several additional characteristics of operator knowledge that should also be captured in our framework.

1) Knowledge Types

The concept of knowledge in the literature is often partitioned and variously classified, with each research domain categorizing knowledge in the way that is most suited for its theories and constructs. Since our focus at present is the knowledge of a human supervisory controller within the context of a complex system, we need to cast our notion of knowledge accordingly. What is the knowledge that is required for supervisory control? Sheridan notes that to function as a supervisory controller, the human agent must be able to command the system in accordance with defined specifications, monitor the system's behavior to ensure that it performs as required, and identify and correct anomalous system behaviors [16], [17]. It is straightforward to interpret these supervisory control tasks in terms of knowledge: the effective human operator needs both knowledge of the current state and behaviors of the system, and knowledge of the rules defined by the system specifications.
To represent these knowledge requirements correctly, our approach will need to capture the knowledge of current system state available from the automated system's interface (e.g. the displays) as well as the knowledge that the human operator is expected to possess regarding the rules of possible and acceptable system behavior.

a) Explicit knowledge

The information available from the human-automation interface at any given point in time is the source of the operator's explicit knowledge. (The human-automation interface need not be visual; our intent is any interface that provides the operator with information relevant to system behavior.) We wish to say, for example, that a competent human driver explicitly knows that the current speed of the car she is driving is 40 miles per hour, since that value is displayed on the speedometer. While the relationship between displayed information and useful knowledge may depend on non-trivial factors such as the agent correctly perceiving the display and understanding its meaning, the compatibility of the display format to the task, and so forth [18], we assume here that the display type, quality, and information saliency are adequate for the task at hand. Explicit knowledge is needed for supervisory control tasks such as monitoring system health and status; it is also necessary for tasks such as synchronizing the actions of interacting or otherwise interdependent system components (e.g., not opening a tank cover until the internal pressure of the tank has dropped below some pre-defined level).

b) Mental model knowledge

A second type of knowledge needed for human supervisory control is the 'knowledge-in-the-head' that the human agent possesses regarding the global behaviors and properties of the physical system with which he is interacting [19]-[21]. This mental model knowledge can be thought of as a collection of inferences that the operator knows to make about the system. Mental model knowledge is required both for reasoning about and interpreting current interface information and for future-oriented supervisory control tasks such as planning and scheduling, troubleshooting, and decision-making [16]-[18]. We assume that the operator's mental model knowledge also includes what we shall call ground knowledge – essential domain-relevant knowledge that the human agent may be assumed to possess, such as basic logical and computational skills. We suggest that this knowledge is 'automatic' in nature and thus requires almost no effortful thinking or reasoning action [18], [22]. For instance, it seems correct to assume that a human agent who is sufficiently skilled to act as supervisory controller of a complex system should be able to automatically compare two displayed integers and determine which of them is greater. Mental model knowledge is normally based in fact (e.g. through training). However, since it evolves over time as a result of repeated interactions, mental model knowledge may grow to include some proportion of unproven (and perhaps incorrect) beliefs [23]. For any typically-sized complex system the human operator's mental model knowledge is clearly incomplete, as the number of potential system behaviors under all possible conditions is too large to be known.
2) Characteristics of Human Knowledge

In order to develop a representation that allows us to reason accurately as well as expressively about human supervisory control knowledge within the context of complex systems, we need to consider several additional aspects or characteristics of operator knowledge.

a) Reasoning and knowledge

In a practical sense, what the operator knows at any given moment is the result of interpreting the explicit knowledge available using the mental model knowledge. Precisely how humans reason, the rules of inference that guide their reasoning activities, or indeed whether humans use any form of mental logic at all are ongoing and strongly debated questions in the literature [24]. The goal of this paper is not to add to that debate, nor to make claims regarding the actual processes by which humans reason, or the specific rules of logical inference they may use. To capture a notion of reasoning useful for our purposes, we will assume that the human supervisory controller has the ability to reach at least a particular set of conclusions that are generated by a small set of deductions that are part of her mental model. For example, consider that at a given moment in time the driver agent explicitly knows that the allowable speed is 25 mph (she just passed the speed limit sign), and that her current actual speed is 40 mph as displayed on the speedometer. Further, assume that the set of facts that comprise her mental model includes the inference that an actual speed greater than an allowable speed may result in a speeding ticket. The inference will then allow the human agent to deduce the (quite important) conclusion that she is currently in danger of receiving a ticket.

b) Human computational ability

A related characteristic of human reasoning ability and the knowledge that results is the question of the operator's computational power – even if the operator only reaches conclusions based on the inferences in her mental model, how many conclusions can she reach in a bounded period of time? If a conclusion requires conclusions from other inferences as inputs or antecedents, how long can this chain of inferences be before the human operator is overwhelmed? As research shows, human computational power is quite limited [18], [22], [25], and even more so in the time- or safety-critical environments within which most complex systems function [26]. In order to capture some reasonable facsimile of the knowledge of a supervisory controller, our approach will have to limit the number of inferences that the operator can make at a given moment.

B. When Does An Operator Know Enough?

If our aim is to provide a practically useful methodology and tool set for the design and analysis of complex systems from the perspective of agent knowledge, we must provide not only a formal means of describing knowledge in these systems, but also formal metrics against which knowledge in systems can be rigorously evaluated. Though the human agent's knowledge of a complex system's possible behavior is incomplete by definition due to the size of the state space, we propose that at a minimum, the human agent's knowledge must be sound and adequate. We describe these properties next, and note that if they hold, we shall say that the human agent's knowledge is satisfactory.

1) Soundness

We will say that the human agent's knowledge is sound at a given system state if the facts that the agent is said to know and the inferences that the agent can make there are true.
As we discuss in following sections, soundness is a fundamental property of knowledge that is formalized as the Knowledge Axiom, stating that only what is true can be known [1]. A formal definition of knowledge for human agents is sound if the Knowledge Axiom holds at all states of the system of interest.

2) Adequacy

While an agent's knowledge in a complex system will necessarily be incomplete, the knowledge must be adequate; in particular, it must guarantee her ability to identify anomalous or 'illegal' behaviors or states (i.e. not in accordance with system specifications). In other words, though the agent will not know, a priori, all the possible behaviors that a system may exhibit, she must always be able to determine when the current system behavior is 'bad'.

Having discussed basic notions related to human knowledge in supervisory control, we next provide a brief introduction to the basic elements of the knowledge formalism in order to recast our model.

III. REASONING ABOUT KNOWLEDGE – AN INTRODUCTION

A. Elements of the Framework

The knowledge formalism was originally adapted to distributed and multi-agent systems by Halpern and Moses and was expanded on in numerous writings [1], [2], [5], [27]-[30]. It includes structures to represent the entities or agents in the system and the knowledge they can be said to possess. The formalism also includes a means for describing the interaction of knowledge and action and the behavior of the system. It thus provides a rigorous methodology for reasoning about knowledge in a dynamic multi-agent system, and for analyzing and proving properties of that knowledge. Here we focus on the elements of the formalism that are relevant to the present work, and describe how knowledge is defined using these elements. The primary references for this part are [1]-[3]. Once the foundations of the formalism have been presented, we introduce a number of concepts that will enable more expressive representation of knowledge properties of human agents.

1) Agents

The formalism considers agents and systems of multiple interacting agents, where an agent might be a robot, a processor, a human, a physical object, or any other entity of interest in the system. The approach includes the external environment as an agent, albeit a special type of agent whose behavior is not under the control of the other agents in the system. In general, the environment's role is to represent all that is relevant to the system that is not captured by the other system agents. In this paper we focus on systems involving a single human agent h (intuitively, this is the operator), a single "automaton" agent a – the complex system that h operates – and the environment agent e. In modeling the speedy driver, for example, we could capture the driver as human agent h, the car as the automaton agent a, and the outside world (including, for instance, traffic lights and whether or not there is a policeman watching) as the environment e.

2) Local states, global states, and agent knowledge

When reasoning about knowledge, we are typically interested in modeling a dynamic situation in which the world evolves, and with it the state of knowledge of the agents. In this section we consider a natural way to model these. We think of every agent at any given instant as being in a well-defined local state. The local state in our setting captures all of the information that is available to the agent at a given instant.
This is the information available to the agent when it determines its next action. The precise structure and contents of this local state depend on the system that is being modeled. A global state corresponds to a snapshot of the state of the agents in the world frozen at an instant. In our three-agent setting, a global state would be modeled by a triple (se, sh, sa), where si is agent i's local state, for i = e, h, a. Intuitively, an agent's local state captures exactly what is visible to the agent at the current point – the agent's knowledge depends solely on its local state. The agent is able to distinguish between two global states exactly if its local state in one is different from its local state in the other. For example, consider a car in which there is an indicator light for low brake fluid. As long as the indicator light is off, the driver cannot distinguish whether or not the brake fluid is leaking. All the driver knows is that the fluid level is not below the critical level. Indeed, if the indicator is not fully reliable, then the driver cannot distinguish a situation in which the indicator is working and the fluid is sufficient from one in which the fluid level is critical but the indicator is malfunctioning. The formalism thus allows us to accurately model and reason about situations in which agents have only partial knowledge of system behavior. The state se is called the local state of the environment, and it accounts for all else that is relevant to the analysis, possibly keeping track of aspects of the world that are not part of any agent's local state (e.g., a traffic light).

3) Dynamic system behavior

The evolution of a world over time produces a history, which in our terminology will be called a run. Formally, a run r is a function from time to global states, assigning to every time instant m a global state r(m). If r(m) = (se, sh, sa), then we denote by ri(m) the local state si, for i = h, a, e. It is often convenient to identify time with the natural numbers, in which case the run is identified with the sequence r = r(0), r(1), .... Facts, represented in our framework by formulas of L, are considered true or false at a point (r, m), consisting of a run r at a time m. The truth of non-epistemic facts (ones that do not involve knowledge) at (r, m) can usually be determined based on the global state r(m) at that point.

We choose to model the types of human-automation systems that we are interested in as consisting of three agents: the human agent h, the automation agent a, and the environment agent e. With this construction we can focus on the human 'knower' h, and capture how h's knowledge influences and is influenced by the behavior of the complete system. Thus, at any given time each agent i, for i = e, h, a, will be in a local state li, representing all the information that i has available at that time. We denote by Li the set of i's possible local states. As in the general case noted above, the tuple of all the agents' local states is a global state g. The set G of all global states is then the Cartesian product G = Le × Lh × La of the sets of local states. In a given application, there can be dependencies between elements in the agents' local states (e.g. shared data), and the set of global states that will appear in a relevant set of runs R will usually be a subset of G. The dynamic behavior of the system is represented by the set of runs R, which is generated by the joint actions of the environment, automation, and human agents.
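To make the run-based machinery concrete, the following is a minimal sketch — the Python encoding, the toy driving domain, and all names are our illustration, not the paper's — of global states as tuples of local states, runs as time-indexed sequences, and the indistinguishability relation ~h:

```python
from dataclasses import dataclass

# Local states for the three agents: environment e, human h, automaton a.
# Here each local state is just a tuple of observable variables.
@dataclass(frozen=True)
class GlobalState:
    s_e: tuple  # environment, e.g. ("light=red",)
    s_h: tuple  # human, e.g. ("speedometer=40",)
    s_a: tuple  # automaton, e.g. ("actual_speed=40",)

# A run is a function from time to global states; over a finite
# horizon we can represent it as a list indexed by time m.
run = [
    GlobalState(("light=green",), ("speedometer=40",), ("actual_speed=40",)),
    GlobalState(("light=red",),   ("speedometer=40",), ("actual_speed=40",)),
]

def indistinguishable_h(g1: GlobalState, g2: GlobalState) -> bool:
    """(r,m) ~h (r',m') iff h has the same local state at both points."""
    return g1.s_h == g2.s_h

# The driver cannot distinguish the two points: the traffic light
# changed, but nothing on her interface did.
assert indistinguishable_h(run[0], run[1])
```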
4) The logical language

We need a language in order to rigorously reason about knowledge in complex systems, and so now formally define a logical language for this purpose. In any given application, there are a (typically small) number of basic facts of interest. These can be the status of the traffic light, the speed of the car, the direction it is driven at, etc. We call such basic facts primitive propositions. In each application or system being considered, we assume that a set Φ of the relevant primitive propositions is given. These will be the basic formulas of our logical language. We create more complex formulas inductively by applying logical connectives to simple formulas. Formally, we will be working with a logical language L, which is the set of formulas defined as follows:

* Every primitive proposition p ∈ Φ is a formula,
* If φ and ψ are formulas then so are ¬φ (standing for "not φ"), φ ∧ ψ (standing for "φ and ψ"), and Ki φ (standing for "agent i ∈ {h, a} knows φ").

We define only the two Boolean operators ¬ and ∧ because all other Boolean operators are definable using these two [31]. (For example, φ → ψ, corresponding to "φ implies ψ", can be defined as shorthand for ¬(φ ∧ ¬ψ).) The language L allows us to express rich and complex statements such as (φ ∧ ¬Kh φ) → ψ, which reads "if φ is true and agent h does not know φ, then ψ must be the case." Such a formula would be true, for example, if φ stands for the brake fluid level being critical, and ψ stands for the fact that the indicator is malfunctioning. In fact, while we will not expand into a discussion of one agent's knowledge of another agent's knowledge in this paper, our language allows us to talk about what the human knows about the automaton's knowledge, what the automaton knows about the human's knowledge, etc.

The propositions p, q, ... and indeed all formulas of L are initially strings that we may intend to ascribe meaning to. We consider that the set of propositions Φ represents the basic facts about the system that may be derived from the design specification. The truth of a primitive proposition p ∈ Φ is defined by way of an interpretation π that maps every p ∈ Φ and global state g to True or False. Thus, if π(p)(g) = True then proposition p is true at the global state g.

Knowledge changes over time as an agent learns new facts and forgets some that are known. Thus a formula is considered true or false at a time m in a history (or run) r in a system or model R. We can think of a world as consisting of a tuple (R, π, r, m), where R is a system, π is the interpretation for the propositions of Φ at states of R, r is a history (or run) in R, and m is the point in time at which we are evaluating truth of formulas. The pair (r, m) is a point, and if r is in R, then it is a point of R. We denote the fact that a formula φ is true, or satisfied, at a world (R, π, r, m) by (R, π, r, m) |= φ. The satisfaction relation |= is formally defined by induction on the structure of φ. Primitive propositions p ∈ Φ form the base of the induction, and their truth is determined according to the interpretation π:

(R, π, r, m) |= p (for a primitive proposition p ∈ Φ) iff π(p)(r(m)) = True. (1)

Negations and conjunctions are handled in the standard way:

(R, π, r, m) |= ¬φ iff it is not the case that (R, π, r, m) |= φ. (2)

(R, π, r, m) |= φ ∧ ψ iff both (R, π, r, m) |= φ and (R, π, r, m) |= ψ. (3)

For a formula Ki φ, the clause (R, π, r, m) |= Ki φ states that Ki φ ("i knows φ") is satisfied at the point (r, m) in system R. We will now discuss when this may hold.
5) Knowledge as truth in all possible worlds

The formal definition of knowledge is traditionally framed within the notion of possible worlds [13], [14]. Two points (r, m) and (r', m') are considered to be indistinguishable for agent i, which we denote by (r, m) ~i (r', m'), if i has the same local state in the global state r(m) as in r'(m'). Intuitively, if (r, m) ~i (r', m') then at (r, m) agent i can equally imagine that the actual state is (r', m'). Hence (r', m') is considered a possible world for i at (r, m). The possible-worlds definition of knowledge states that an agent knows a fact precisely if this fact is true in every state or world that the agent considers possible. If two points (r, m) and (r', m') are indistinguishable to an agent, and the fact φ is false at (r', m'), then at (r, m) the agent cannot be said to know that φ is true. More formally,

(R, π, r, m) |= Ki φ holds iff (R, π, r', m') |= φ holds for all points (r', m') with r' ∈ R that satisfy (r, m) ~i (r', m'). (4)

Observe that the four clauses (1)–(4) above defining the satisfaction relation |= enable us to determine, for every formula φ ∈ L and every world (R, π, r, m), whether φ is or is not satisfied at the world. This definition of knowledge is a powerful and useful tool for the description and analysis of properties of multi-agent systems, and enables formal proof of agent knowledge or, alternately, the impossibility of that knowledge. However, several fundamental properties of the approach make it problematic for modeling human agents in human-automation systems. We briefly mention these properties here prior to discussing the modifications to the definition that will allow a more appropriate and expressive representation of the knowledge of human agents (cf. [13]).

6) Logical omniscience

One of the most important difficulties with applying the traditional possible-worlds definition of knowledge to human agents is that it implies a notion of logical omniscience. Intuitively, logical omniscience refers to the fact that if an agent knows a set of facts it will also know all logical consequences of that set. In our context, logical omniscience suggests that in any state an agent will know all the facts that follow from the information available in that state. That is, the agent's knowledge in this case is closed under logical inference. Though this may be a fair assumption for an agent with a powerful capacity for reasoning, the typical human agent clearly does not fall into this category. (This point is most succinctly emphasized by Konolige [37], who notes that if humans were capable of unlimited reasoning, a chess player would know the outcome of a chess match immediately following the first move.) The knowledge that can be ascribed to a human agent using the traditional model is thus far greater than what the agent can actually be expected to know. In order to better fit the task of analyzing systems with human agents, our modifications must provide a more realistic measure of the human agent's limited ability to infer and to reason, and the limitations on human knowledge that result.

7) Representing false knowledge

As we have seen, the traditional possible-worlds definition of knowledge entails the Knowledge Axiom, stating that an agent knows only true facts: Ki φ → φ. In the case of an analysis of automated agents, for whom knowledge is in any event an externally ascribed notion, this does not appear to be troublesome. For human agents, however, false knowledge is a common state of affairs and a particularly relevant issue in human-automation interaction – we would wish to capture, for instance, aspects of the interface design that mislead the human operator into holding false beliefs.
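Clauses (1)–(4) translate directly into a small recursive evaluator over finite systems. The sketch below is our own illustration, not the authors' code: formulas are encoded as strings and nested tuples, runs as finite lists of global states (dictionaries from agent name to local state), and the interpretation π as a dictionary of predicates.

```python
# Formula encoding: "p" for atoms; ("not", f); ("and", f, g); ("K", i, f).
def holds(formula, R, pi, r, m):
    """Satisfaction relation |= for the language L, per clauses (1)-(4).

    R:  list of runs; each run is a list of global states, a global
        state being a dict {agent_name: local_state}.
    pi: interpretation; pi[p] is a predicate on global states.
    """
    if isinstance(formula, str):                      # clause (1)
        return pi[formula](R[r][m])
    op = formula[0]
    if op == "not":                                   # clause (2)
        return not holds(formula[1], R, pi, r, m)
    if op == "and":                                   # clause (3)
        return (holds(formula[1], R, pi, r, m)
                and holds(formula[2], R, pi, r, m))
    if op == "K":                                     # clause (4)
        i, sub = formula[1], formula[2]
        here = R[r][m][i]
        return all(
            holds(sub, R, pi, r2, m2)
            for r2 in range(len(R))
            for m2 in range(len(R[r2]))
            if R[r2][m2][i] == here                   # (r,m) ~i (r',m')
        )
    raise ValueError(f"unknown operator {op!r}")

# Two one-step runs: brake fluid fine vs. low; the warning light is
# off in both, so the driver h cannot know "fluid_ok".
R = [
    [{"h": "light_off", "a": "fluid_ok"}],
    [{"h": "light_off", "a": "fluid_low"}],
]
pi = {"fluid_ok": lambda g: g["a"] == "fluid_ok"}
assert holds("fluid_ok", R, pi, 0, 0)
assert not holds(("K", "h", "fluid_ok"), R, pi, 0, 0)
```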
IV. DEFINING BOUNDED KNOWLEDGE IN A COMPLEX HUMAN-AUTOMATION SYSTEM

A. An Appropriate 3-Tiered Syntactic Model of Human Knowledge

We now propose a formalism in which the assumptions about the computational aspects and contents of an agent's knowledge are limited, in order to more appropriately represent the bounded character of human knowledge. Rather than using possible-worlds semantics as the primary component, we will present a syntactic model in which the known facts will be obtained based on a restricted amount of reasoning, and in which what is boundedly known may not be true (for related approaches to syntactic knowledge see [13], [32]). We model the human's knowledge as a three-tiered structure.

Tier 1 – Explicit knowledge: At the base is the agent's explicit knowledge, which we intuitively think of as the raw information available from the system interface: clock readings, indicators, aural warnings, and so forth.

Tier 2 – Automatic knowledge: Based on the agent's explicit knowledge, we allow a set of ground (or "automatic") conclusions, in which various propositions that may not be explicitly represented in the state are immediately observed. For example, the local state may contain two 5-digit numbers, and the agent may be expected to know immediately which of them is larger. Similarly, a driver in a residential zone who sees the speedometer reading 70 mph (as part of his explicit knowledge) would automatically know that "I am speeding" is true.

Tier 3 – Bounded knowledge via deductions: On top of the explicit knowledge and the ground conclusions it affords, we think of the agent as being able to perform a finite set of deductions. Each of these deductions will use the fact that a set of propositions appears at the ground and explicit knowledge tiers to conclude that the agent knows a formula of interest. This is a strictly limited process, however, in that knowledge deduced at this level cannot be used for further rounds of deduction.

We now present the formal definition of this syntactic framework. First, we define the logical language L+, extending the language L by adding a "bounded knowledge" operator K̂h:

* All formulas of L are formulas of L+,
* if φ is a formula of L+ then so is K̂h φ (thus, L+ is closed under K̂h), and
* L+ is closed under ¬, ∧, and Ki.

Next, if Φ is the set of propositions, we define the set Φ^c = Φ ∪ {¬p : p ∈ Φ} containing all propositions and their negations. We model the human agent's explicit knowledge by way of a function fh : Lh → 2^(Φ^c) specifying which elements of the set Φ^c the agent explicitly knows to be true at any given local state lh ∈ Lh of the human agent. Observe that explicit knowledge will satisfy the Knowledge Axiom only if the function fh ensures that all explicit knowledge is true. The agent's ground knowledge, which we think of as being derived automatically from its explicit knowledge, is captured by way of a function ah : 2^(Φ^c) → 2^(Φ^c) that, when applied to a subset T ⊆ Φ^c, produces another such subset ah(T) = T'. In our setting, the function ah will be applied to sets T describing the agent's explicit knowledge. For ease of exposition, we think of the ground knowledge as extending the explicit knowledge, so we formally require that ah be monotonic: T ⊆ ah(T).
Finally, we consider a deduction phase determined by a set Dh (typically finite) of implications of the form p1 ∧ p2 ∧ ... ∧ pk → K̂h φ, where pi ∈ Φ^c for i = 1, 2, ..., k. We denote by dh(T) the application of the implications in Dh to a subset T ⊆ Φ^c of propositional formulas. Formally,

dh(T) = {K̂h p : p ∈ T} ∪ {K̂h φ : (p1 ∧ p2 ∧ ... ∧ pk → K̂h φ) ∈ Dh and p1, p2, ..., pk ∈ T}.

In words, the agent's bounded knowledge is obtained by applying a fixed set of deductions to its explicit and its ground knowledge. The latter, in turn, is based on the explicitly available knowledge consisting, for example, of data presented on the man-machine interface in a control room. The fact that the deduction phase consists of a single step in which a finite number of deductions are applied enforces the boundedness of this type of knowledge, avoiding forms of logical omniscience.

Putting these elements together, we define the knowledge of the human agent h by way of an epistemic setup εh = (fh, ah, Dh). We think of the epistemic setup as encoding a 3-step process by which h's structured knowledge is obtained as a function of its local state l:

Step 1. The function fh is applied to the local state to obtain the agent's explicit knowledge.

Step 2. The function ah is applied to the result to account for the immediate conclusions that the agent draws automatically from its explicit knowledge.

Step 3. The set of inferences available in the agent's mental model, captured by the Dh component, is applied once using the function dh, generating the agent's knowledge at l.

Formally, applying εh to a local state l yields the set εh(l) = dh(ah(fh(l))), consisting of knowledge formulas K̂h φ that the human agent is taken to know at local state l.

An example will help to clarify. Let lh be the local state of a human agent h piloting an aircraft, and let time instant m at r(m) be the final approach just prior to landing. Among the many indicators in the cockpit are an indicator for the status of the flaps (trailing edges of the wings – full flaps is typically the normal status for optimized landing safety and efficiency) and an indicator for the status of the landing gear, which may be up, transitioning, or down and locked. The pilot's local state lh at any time will consist of the status of these two indicators; for instance, lh = (flaps.not.full, landing.gear.up) describes the pilot's local state when the flaps are not at full extension and the landing gear is folded up into the airplane. In this local state the set of propositions that are true – the pilot's explicit knowledge – are that the flaps are not fully extended and that the wheels are up. We'll call these propositions p1 and p2, respectively. By applying the automatic ground knowledge function ah to this set of propositions, the pilot also knows that the airplane is not configured for landing; this immediate conclusion, call it p3, is added to the set of propositions that the pilot knows. For the third step in the process, we'll assume that a trained pilot's mental model includes an inference d that captures the rule 'if on final approach then full flaps and landing gear down and locked'. Applying this inference to the set of propositions p1, p2, and p3 for time instant m at r(m) = final approach generates the pilot agent's knowledge of the formula φ at local state l, where φ is 'on final approach and aircraft not configured for final approach, so go around (return to altitude in order to safely configure the aircraft for landing)'. Thus, K̂h φ describes the pilot's bounded knowledge at l.
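The three-step pipeline εh(l) = dh(ah(fh(l))) can be prototyped directly. Here is a minimal sketch using the landing example; the proposition names and the particular fh, ah, and Dh are illustrative assumptions on our part, not definitions taken from the paper.

```python
# A toy epistemic setup eps_h = (f_h, a_h, D_h); names are ours.

def f_h(local_state):
    """Tier 1: explicit knowledge read straight off the interface."""
    return set(local_state)  # e.g. {"flaps_not_full", "gear_up"}

def a_h(T):
    """Tier 2: automatic ground conclusions; monotonic (T <= a_h(T))."""
    T = set(T)
    if {"flaps_not_full", "gear_up"} & T:
        T.add("not_configured_for_landing")
    return T

# Tier 3: D_h, the mental-model deductions p1 & ... & pk -> K_hat(phi).
D_h = [
    ({"on_final_approach", "not_configured_for_landing"}, "go_around"),
]

def d_h(T):
    """One round of deduction only: no chaining, so no omniscience."""
    known = {("K_hat", p) for p in T}
    known |= {("K_hat", phi) for premises, phi in D_h if premises <= T}
    return known

def eps_h(local_state):
    return d_h(a_h(f_h(local_state)))

l = ("flaps_not_full", "gear_up", "on_final_approach")
assert ("K_hat", "go_around") in eps_h(l)
```

Note how the boundedness is enforced structurally: d_h quotes its conclusions under K_hat, and nothing feeds them back into another round of deduction.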
B. Knowledge of Bad System Behavior

A supervisory controller must at all times know whether the behavior of the system is within specified and acceptable bounds. In addition to the system-specific knowledge formulas that will be part of the human agent's epistemic setup εh, the agent is expected to know, at each local state l in which the current global state is not in accordance with system specifications, that there is a problem. We define pbad as a proposition standing for 'the current behavior of the system is not acceptable', where the notion of "acceptable" depends on the application. Moreover, we require that K̂h pbad be included in the set εh(l) at any state of the system that is not guaranteed to be within acceptable bounds. In the example above, if we assume there is nothing that precludes the go-around maneuver, then the current behavior of the system is acceptable and the formula K̂h pbad is not part of the pilot's knowledge at this local state l. Alternately, if the pilot in this situation cannot execute the go-around maneuver, then a bad state exists and K̂h pbad would be included in the pilot's knowledge at l.

At present we make the simplifying assumption that the human knows all the formulas in her epistemic setup εh(l) from the instant that she is in l, and do not address the (more realistic) possibility that drawing even a simple set of inferences may require some time. While we can easily extend our approach to capture this distinction, the resulting definitions are somewhat cumbersome, and are beyond the scope of this paper. (See [33] for definitions of knowledge that take computational complexity into account.)

C. The Epistemic System Model

The mathematical model in which we can both define truth of formulas and generate the human agent's knowledge will be an epistemic system E = (R, π, εh), where R is the set of possible runs, π is the interpretation of primitive propositions at the global states of R, and εh is the epistemic setup of the human agent as described above. Truth of formulas in L+ can now be defined with respect to E = (R, π, εh). At a given time m in a run r, we define:

(E, r, m) |= p (for p ∈ Φ) iff π(p)(r(m)) = True

(E, r, m) |= φ ∧ ψ iff both (E, r, m) |= φ and (E, r, m) |= ψ

(E, r, m) |= ¬φ iff it is not the case that (E, r, m) |= φ

(E, r, m) |= K̂h φ iff K̂h φ ∈ εh(rh(m))

Finally, since the system R and the interpretation π are components of the epistemic system E = (R, π, εh), we can also define the truth of standard (possible-worlds) knowledge with respect to E. For i = a, h, e:

(E, r, m) |= Ki φ iff (E, r', m') |= φ for all points (r', m') ~i (r, m)

The latter clause will enable us to consider possible-worlds knowledge and bounded knowledge within the same framework. Recall that K̂h represents a syntactic, rather than semantic, notion of knowledge. That is, what the human agent "knows" in each local state is a set of knowledge formulas that is determined by the epistemic setup εh. There is no a priori guarantee that this knowledge is true, or even consistent. Given an epistemic system E, we write E |= φ (and say that φ is valid, or tautologically true, in E) if (E, r, m) |= φ holds for all points (r, m) of R.

D. Criteria for Satisfactory Bounded Knowledge

We now formalize the soundness and adequacy properties to enable the analysis of whether or not the human agent's bounded knowledge in an epistemic system E = (R, π, εh) is satisfactory for supervisory control.
1) Soundness

We say that an epistemic system E = (R, π, εh) is sound if the following two statements are true:

1. For every (r, m) with r ∈ R, if q ∈ ah(fh(rh(m))) then (E, r, m) |= q. In words, for every point (r, m) in a run of R, if q is in the human agent's explicit or automatic knowledge, then q is true at (r, m) in the epistemic system E. Moreover,
2. E |= (p1 ∧ p2 ∧ ... ∧ pk) → φ holds for every implication p1 ∧ ... ∧ pk → K̂h φ in Dh.

Intuitively, soundness guarantees that E |= K̂h φ → φ holds, meaning that every conclusion that the human agent draws using her mental model is true at every point (r, m) in the epistemic system E. We will prove this fact formally in Corollary 1 below. The importance of soundness in this context is that it ensures that what the human agent is said to know according to K̂h is in fact true. If the epistemic system E does not satisfy soundness, so that E |≠ K̂h φ → φ, then there is at least one point at which at least one formula φ known to the agent according to the epistemic system E is false. In this case, either an element of the agent's explicit or automatic knowledge is incorrect, or an implication in Dh is incorrect (possibly both). The notion of soundness allows us to consider situations in which the knowledge available for supervisory control is incorrect. The faulty indicator light for low brake fluid mentioned previously is an example – if the indicator is malfunctioning and providing the explicit information 'brake fluid indicator light off', the driver may boundedly know that the brake fluid is sufficient while in fact it is critically low. This is in comparison with a lack of required knowledge, which is addressed by the adequacy criterion we consider next.

2) Adequacy

Though a human operator's knowledge of a complex system is incomplete by definition, we will say that the human's knowledge is adequate for supervisory control only if the operator can always recognize unsafe system states and avoid anomalous system states and behaviors. (We refer here to significantly anomalous states and behaviors that impact overall performance or safety, and not the transient anomalies that are automatically corrected and are part of any complex system.) That is, the agent must be able to use her own local state to determine whether the current state is acceptable. For example, assume that a well-intentioned but somewhat confused cost-cutting designer concludes that since wheels are either up or down, having multiple indicators for wheel position is wasteful and that one 'wheels up' indicator should be sufficient. At any given time, the explicit knowledge of the pilot of this low-cost airplane is comprised solely of fh(l) = {pup}: 'wheels are up', or its negation ¬pup: 'not (wheels are up)', or more familiarly, 'wheels are not up'. Landing, of course, is allowed only if the wheels have completed transitioning and are locked in the 'down' position. However, the pilot's display does not allow her to distinguish between the wheels being in transition, ¬pup, and the wheels being locked in the down position, also ¬pup, and so the information available is not adequate for her to determine whether indeed the aircraft is configured for landing.

To formalize adequacy in epistemic systems, recall that we defined pbad as a proposition standing for 'the current state of the system is not acceptable.' Moreover, we required that K̂h pbad hold at all points at which the global state is not in accordance with the system specification. We then say that an epistemic system E is adequate if the following hold:

1. K̂h pbad ∈ εh(rh(m)) for all states of the system such that (E, r, m) |= pbad; thus pbad → K̂h pbad is valid in E.
Soundness must also hold, so that the operator's bounded knowledge of a bad state is true:

2. E is sound, so that, in particular, E |= K̂h pbad → pbad.

E. Human Knowledge and Possible-Worlds Knowledge

Our syntactic representation of bounded human knowledge via an epistemic system E makes no assumptions regarding logical omniscience, consistency of reasoning, or the actual truth value of the human agent's knowledge. By using this approach we are able to capture an expressive and reasonable description of a system with human agents while at the same time avoiding the problems inherent in the application of the possible-worlds definition. However, is there a price to be paid for this useful tool set? Is it less formal, less precise, or does it limit the conclusions that can be reached regarding what the human agent knows? While additional work is needed to fully determine the limitations of our framework, we do claim that our approach precludes any arbitrary or specious claims to agent knowledge. As the following theorem and corollaries show, any proposition φ that is known by human agent h in a sound system E is also known when knowledge is defined as truth in all possible worlds:

Theorem 1: Let E = (R, π, εh) be an epistemic system and let φ ∈ L+. If E is sound, then E |= K̂h φ → Kh φ.

In words, this theorem states that any proposition φ that the human agent (boundedly) knows in a sound epistemic system E is also known by that agent in the possible-worlds sense.

Proof: Suppose that E = (R, π, εh) is sound. We need to show that if (E, r, m) |= K̂h φ then (E, r, m) |= Kh φ holds as well. By definition, (E, r, m) |= Kh φ holds if (E, r', m') |= φ holds at all points (r', m') satisfying (r', m') ~h (r, m). So let (E, r, m) |= K̂h φ and (r', m') ~h (r, m). Denoting l = rh(m), we have that r'h(m') = l as well. By definition of (E, r, m) |= K̂h φ we have that K̂h φ ∈ εh(l) = dh(ah(fh(l))). By definition of εh and dh this means that either (a) φ = p ∈ ah(fh(l)), or (b) there is an implication p1 ∧ ... ∧ pk → K̂h φ in Dh with pi ∈ ah(fh(l)) for every i = 1, ..., k. In case (a) we have by clause (1) of soundness that (E, r', m') |= φ. In case (b) we have by clause (2) of soundness that E |= (p1 ∧ p2 ∧ ... ∧ pk) → φ. Since we also have by clause (1) that (E, r', m') |= pi for every i, it follows that (E, r', m') |= p1 ∧ p2 ∧ ... ∧ pk, and thus (E, r', m') |= φ, as desired. QED

Since (r, m) ~h (r, m) for all points (r, m), by definition, we have that standard (possible-worlds) knowledge satisfies the Knowledge Axiom (or knowledge property) E |= Kh φ → φ. Given Theorem 1, it immediately follows that so does human agent knowledge:

Corollary 1: If E is sound, then E |= K̂h φ → φ holds for all φ ∈ L+.

Corollary 1 says that when a system E is sound, every formula φ in L+ that the human agent boundedly knows to be true according to E is indeed true. A second corollary consists of the equivalent, contrapositive form of Theorem 1:

Corollary 2: If E is sound, then E |= ¬Kh φ → ¬K̂h φ.

In other words, if human agent h does not know a proposition φ in a system (R, π) according to the possible-worlds definition of knowledge, then h will not know φ in any sound epistemic system E extending the system (R, π).
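Over a finite set of points, soundness clause (1) and the adequacy requirement reduce to simple universally quantified checks. The sketch below is our own illustration (the function names and the finite-state encoding are assumptions); clause (2) of soundness — validity of the implications in Dh — would be checked analogously by quantifying over all points.

```python
def is_sound_clause1(points, f_h, a_h, truth):
    """Soundness clause (1): everything in a_h(f_h(l)) is true at
    the global state where it occurs.

    points: iterable of (l, g) pairs - the human agent's local
            state l paired with the global state g it occurs in.
    truth(q, g): does proposition q hold at global state g?
    """
    return all(
        truth(q, g)
        for l, g in points
        for q in a_h(f_h(l))
    )

def is_adequate(points, eps_h, is_bad):
    """Adequacy: K_hat(p_bad) is in eps_h(l) at every bad state."""
    return all(
        ("K_hat", "p_bad") in eps_h(l)
        for l, g in points
        if is_bad(g)
    )
```

If is_adequate fails on every candidate eps_h — for instance because two states, one bad and one acceptable, give the human agent the same local state l — then by Corollary 2 the design itself is flawed, which is exactly the diagnostic pattern used in the case study below.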
The relationship between knowledge in an epistemic system and the classical possible-worlds interpretation of knowledge demonstrates that our approach is fundamentally grounded, with Theorem 1 and its corollaries collectively providing the formal support. Assume, for instance, that there is an epistemic system E that models the knowledge of the agents in a complex human-automation system S. In E there is a local state l of the human agent at which h boundedly knows a proposition q. According to Theorem 1, if E is sound, then h must also know q when at l according to the possible-worlds approach. Thus if E is sound, any proposition that the human agent knows in our modified approach is indeed true. This relationship between the two approaches also allows us to demonstrate when the design of a given system R makes satisfactory knowledge impossible. If, for example, we show that for a state in R satisfying pbad the human agent does not know pbad at that state, then by Corollary 2 no epistemic system for R can be adequate and sound, and the design of R is fundamentally flawed. The design must be changed – perhaps by adding explicit information to the human agent states – before an adequate epistemic setting can be obtained.

Epistemic systems are a framework for reasoning about knowledge in complex human-automation systems that allows us to expressively describe the knowledge a human agent has available for supervisory control. At the same time, the framework allows us to formally determine whether that knowledge enables effective system performance. In the remainder of the paper we discuss a representative case study in which our approach is used to model and analyze the classic Therac-25 problem [10]. We use the formalism to describe the knowledge available to the human operator as a function of the system's design, to prove that the existing design precludes effective supervisory control, and to identify the knowledge that is lacking. Following that, we show that once the missing knowledge is made available (by a change in the interface in this case), effective supervisory control is enabled. In this manner we establish the viability, value, and correctness of thinking formally about human-automation systems at the level of the knowledge in the system, and what agents know.

V. THE THERAC-25 DEVICE PROBLEM – OVERVIEW

The case study is presented at two levels of detail. The first, immediately following, is a high-level overview discussion that aims to enhance an intuitive understanding of our approach and its value for reasoning about complex human-automation systems. The second, included in the Appendix, is a complete formal representation of the problem and the solution.

A. Problem Description

"Between June 1985 and January 1987, a computer-controlled radiation therapy machine, called the Therac-25, massively overdosed six people. These accidents have been described as the worst in the [then] 35-year history of medical accelerators" [10]. During investigation of the Therac-25 failures, a common factor in two of the accidents was the device's operator. In both cases, the operator had started and completed the patient data entry task, and then had gone back to edit one or more values before starting the treatment.
Analysis showed that the operator's data entry speed was the cause of the machine's behavior – while editing data was an allowable function, the operator was able to edit the data and return to a 'data entry complete' state so quickly that the machine did not record, and hence was never aware, that the data editing had occurred. The operator then initiated treatment believing that the dosage would be in accordance with the newly edited parameter values displayed on the interface, while the actual dosage was given to the patient according to the original data values. Several flaws in the system's design were implicated in the accidents. The faulty design introduced a system state during which the treatment values could be edited, and the machine activated, without the new values actually being processed. Furthermore, the interface provided almost no information to the operator about the internal behavior of the machine, and the operator thus had no way of knowing whether the machine had accepted her data editing.

B. Modeling the System

In order to reason more clearly about the problem and potential solutions we first need to generate a more formal representation of the significant elements of the Therac-25 problem.

1) Agents

There are two agents that must be included in the model: the human operator h, and the Therac-25 device, denoted a. (Reference to the environment is omitted in this example, as the device is deterministic and so the environment plays no essential role.)

2) Local states

The human operator's local states in this example capture the information that is available via the Therac-25 device's interface. For instance, the operator's local states will include a variable representing the status of data entry or modification, and a variable for the 'ready-to-treat' status of the device, as presented to the operator by the interface. The machine's local states consist of variables indicating whether data entry is complete, whether the entered data has been modified, and whether the system is in treat mode.

3) Actions and joint actions

In each state, the operator h and the machine a perform actions that may change the global state. The operator of the Therac-25 device can input data, modify the data entered, initiate treatment, or do nothing. The device can process data that has been entered, treat, or do nothing.

4) Dynamic system behavior

We represent the dynamic behavior of the combined operator-device system by defining the possible transitions between global states as a function of the operator's and the device's joint actions. For example, consider the global state g0 as the initial state in which the operator has not yet entered any treatment data. A possible joint action in this state would be for the operator to enter treatment data and the Therac device to do nothing. This joint action would cause the system to transition to a global state g1 in which the treatment data has been entered but not yet processed. By defining all possible transitions between states we can obtain a complete model of the system's set of possible behaviors (which will form our set of possible runs R). A crucial task in modeling the behavior of the system is to identify behaviors that are unacceptably anomalous, so that the set of points (r,m) in R at which pbad is true may be clearly defined.
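To make the modeling steps concrete, a small Python sketch of global states and joint-action transitions follows; the names mirror the appendix model but are shortened, and the transition table is deliberately truncated to the g0 → g1 example just described, so this is a sketch of the construction rather than the full automaton of Fig. 1.

```python
# Global state = (operator local state, device local state); the dynamic
# behavior is a map from (global state, joint action) to the successor state.
g0 = (("no.data", "sys.ready.no"), ("no.data", "new.no", "ready.no"))
g1 = (("data.in", "sys.ready.no"), ("treat.data", "new.no", "ready.no"))

transitions = {
    # operator enters data while the device does nothing (the null action):
    (g0, ("input.data", "null")): g1,
    # ... the remaining joint-action transitions would complete the run set R
}

def step(g, joint_action):
    """Apply one joint action; joint actions with no effect leave g unchanged."""
    return transitions.get((g, joint_action), g)

assert step(g0, ("input.data", "null")) == g1
```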
5) Modeling the human's knowledge

The next step in the development of the model is to define and formally represent the operator's knowledge in each of the states of the system; this is the knowledge, the epistemic setup εh, that the operator has in order to control the system. We start by generating the set of primitive propositions or facts that are relevant to the complete system's dynamic behavior. For instance, facts such as 'no treatment data is entered' and 'device is ready to treat' would be part of this set. The truth value of these facts at a given time will be a function of the actions of the operator and the machine, and the physical dynamics of the complete system. Thus, for example, the proposition 'no treatment data is entered' will be true only if the operator has not yet entered any treatment data. We then add new propositions of ground or automatic knowledge such as 'data is entered accurately'. As the final step in modeling the operator's bounded knowledge we define the implications that comprise the operator's mental model knowledge, used to interpret the explicit knowledge in her local state. The Therac operator's mental model might include the implication 'if data entry is complete and the device is ready to treat and the entered data is correct, then treatment can be initiated'; in a local state in which the propositions 'data entry is complete', 'the device is ready to treat', and 'entered data is correct' are true, the operator would know that treatment can be initiated. Critically, the operator's mental model must also include the implications that enable knowledge of anomalous system behaviors, so that the agent knows when pbad is true.

C. Satisfactory Knowledge and Possible-Worlds Knowledge in the Therac-25 Problem

Once the system's behavior and the operator's bounded knowledge have been modeled, we use our definitions of soundness and adequacy to analyze the system and demonstrate that the Therac-25 operator's bounded knowledge as modeled is not satisfactory for supervisory control. Examining the model within the possible-worlds framework shows further that the design of the Therac-25 is flawed and so renders satisfactory knowledge impossible. To begin, recall that the operator of the Therac device boundedly knows that treatment data values can be modified prior to initiating the patient's treatment. We assume that in a given run r the operator modifies previously entered data, the device processes the data and displays its 'ready to treat' status, and the operator initiates treatment. In this run r the device functions as expected and treats the patient in accordance with the modified data; the operator knows the system behavior is acceptable.

Now we consider another run r' in which the operator's actions are the same as in run r but the device does not process the new data prior to treatment being initiated. Thus pbad holds once treatment is initiated. Since the operator's local states in r' are the same as in r, she will have the same knowledge as in r, and so boundedly knows that the system behavior is acceptable. There is no indication in her local states that once treatment is initiated, pbad holds. In both runs r and r', at the point m at which treatment data has been edited, the proposition p = 'device is ready to treat' is part of the operator's explicit knowledge: (E,r,m) |= K̂h p and (E,r',m') |= K̂h p.
However, since in run r' at (r',m') the edited data has not yet been processed by the device, p is false at (r',m'), and so the operator's bounded knowledge of p is incorrect. Soundness, a necessary property of satisfactory knowledge, therefore does not hold. In run r', though pbad holds after initiation of treatment, pbad is not in the operator's local state, and so adequacy does not hold. While at the point (r,m) treatment data has been edited and processed by the device, there is another point (r',m') at which the human operator has the same local state as at (r,m), so that (r,m) ~h (r',m'), but at which the new data has not been processed by the device. Thus, while the operator boundedly knows p at both (r,m) and (r',m'), p is true at one point and false at the other. Since (r,m) ~h (r',m'), Kh p (stated in terms of possible-worlds knowledge) does not hold. Corollary 2 implies that no epistemic system for this device can be adequate: there is guaranteed to be a bad state that the operator will be unable to detect as such. The impossibility of satisfactory knowledge for supervisory control of the Therac-25 system is thus shown. This is discussed further and formally proved in the Appendix.

D. Problem Resolution

In addition to supporting a fine-grained analysis of a complex human-automation system, our approach provides clear direction for problem resolution. In the case of the Therac-25 problem, the knowledge-based model and analysis suggest the need for a design modification that provides the operator with the information needed to truly know that the device has processed the operator's edited data values and so is indeed ready to treat.⁴ This modification would allow the operator to easily determine the true 'ready to treat' status of the device and so should preclude the possibility of initiating treatment with incorrect treatment data.

Footnote 4: We do not suggest that adding display elements for each fact the human needs to know is a desirable or even viable solution option. Display optimization for the knowledge that is identified as necessary is (far) outside the scope of this research.

E. Supporting Design Modification Via Bounded Knowledge

It remains to demonstrate, using our formal framework, that this modification does indeed result in a system in which the knowledge available is satisfactory for supervisory control. Again assume that there is a state g in which treatment data has been edited and processed by the device, and a state g' in which treatment data has been edited but not processed by the device. Let Em be the epistemic system that models the knowledge of the operator and device in the Therac-25 system modified as described in Section V.D above. In the local state l, the operator's epistemic setup εh(l) = dh(ah(fh(l))) now contains a knowledge formula K̂h φ representing the operator's bounded knowledge of the global system's readiness to treat. For this system Em to be sound, i.e., for what the operator boundedly knows to indeed be true, the operator must know that the system is ready to treat within the possible-worlds framework as well. It is easy to see that this is so. In any global state g' in which treatment data had been edited but not yet processed by the device, the modified device would provide an indication to the operator that the system is not ready to treat. The operator would thus know that the system is ready to treat only in states in which the system truly is ready to treat. That is, in Em the proposition 'ready to treat' is true in any state or world in which the operator boundedly knows it to be true. This demonstrates that the suggested modification generates a sound epistemic system Em.
Theorem 4 in the Appendix states this in more formal terms, and provides proof that the knowledge available with this modification is satisfactory for supervisory control.

F. Summary

The goal of this section was to provide an initial intuitive example of how our approach may be applied to a real system. Though for most complex systems of interest neither the model nor the solution will be nearly as easy to define, this Therac-25 example shows how a 'real-life' system may be modeled, and its human-automation interface design evaluated, using the knowledge of its operator. For the interested reader, the complete detailed model and analysis is included in the Appendix.

VI. DISCUSSION AND DIRECTIONS FOR FUTURE RESEARCH

In this article we presented an initial attempt to utilize a well-established formal theory of knowledge and action in multi-agent systems in order to conduct a knowledge-based analysis of a complex human-automation system. This approach allows us to reason cleanly and rigorously about the design and performance of a system as a function of one of its most fundamental resources – the knowledge of the agents. The Therac-25 analysis revealed the existence of design flaws in the system that precluded the human agent from being an effective supervisory controller, and identified the knowledge that was missing. No additional theories of performance or behavior were required, and the human and automation agents were depicted using an expressive common construct without losing important and unique properties of either of these distinct agents. The example demonstrated that our approach offers a formal, expressive, and parsimonious methodology for the design and analysis of complex systems, and we believe this to be a significant contribution. As an initial step in a new direction, the work discussed here raises many intriguing points for investigation; we briefly discuss several that we believe particularly worthy of further pursuit.

A. A Richer Notion of Human Knowledge

A first direction for further research is to expand on and more completely define a notion of human knowledge within our formal framework. For instance, we may wish to consider a classification of human knowledge in terms of its contents in addition to the type-based taxonomy defined in the current paper, such as the declarative/procedural dimension often used in cognitive modeling [34]-[36]. In this manner we may gain a more multi-dimensional model of human knowledge that would render our approach useful in a wider range of problem domains. Another aspect of human knowledge that is important in supervisory control is the distinction between knowledge and belief. By capturing the important distinction between a human operator formally knowing a fact p and believing p, we can more precisely investigate the impact of incorrect human belief on system performance. Belief has been formally represented using a number of techniques [30], [37], and a valuable goal is to identify and build on the technique most appropriate for modeling a notion of human belief in domains of interest.
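One standard way to draw that distinction formally, recorded here as textbook background (cf. [13], [30]) rather than as a construction of this paper, is that a knowledge operator satisfies the truth axiom while a belief operator is required only to be consistent:

```latex
% Knowledge vs. belief: the truth axiom holds for K_h but not for B_h.
\begin{align*}
\text{(T)}\quad & K_h\varphi \rightarrow \varphi
      && \text{knowledge is veridical,}\\
\text{(D)}\quad & B_h\varphi \rightarrow \lnot B_h \lnot\varphi
      && \text{belief is merely consistent,}\\
& \not\models\; B_h\varphi \rightarrow \varphi
      && \text{an agent may believe a false } \varphi.
\end{align*}
```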
An additional element of human knowledge and reasoning that is particularly important in the supervisory control context is the concept of counterfactual reasoning ('if p were to hold then q would be true') [38]. When an operator plans future actions, especially error recovery actions, counterfactual reasoning supports the operator's consideration of conditional alternatives and the outcomes of hypothetical scenarios [39], [40]. The ability to reason about and identify the knowledge needed for effective counterfactual reasoning will be useful in the design of more robust systems that provide the information needed for an operator to successfully 'think through' novel, perhaps safety-critical situations. Preliminary work suggests that incorporating an existing formalization of counterfactual reasoning [27] into our approach will provide designers with a means to do so.

Once these elements of human knowledge are added to our framework, the approach will support the modeling and analysis of a wider and more realistic set of complex systems. For example, a designer could more accurately evaluate the potential failure conditions of supervisory control inherent in a system that executes in a highly ambiguous environment by limiting the human agent's knowledge of important automation and environment agent behaviors. We may be able to capture differences in system performance that result from differences in the amount or quality of knowledge possessed by the human controller. If qualitative differences in knowledge distinguish between novice and expert human operators [41], [42], for example, this would allow a designer to gauge the vulnerability of the system to novice supervisory control and might make salient required emphases in training. There are, no doubt, many additional aspects of human agent knowledge that are relevant to our problem domain and that are amenable to formal representation in our framework; it is an interesting extension of our work to identify them. Again, the goal should not be to draw a true and faithful picture of human knowledge, but rather to develop a representation that is epistemically adequate [43] and that allows us to reason expressively and formally about important properties of human knowledge within the context of complex systems.

B. Extending the Formal Model

Our formalization of human knowledge has drawn on various approaches in epistemic logic to create a framework that is expressive enough to begin capturing the unique properties of human agent knowledge without sacrificing the rigor of formal logic. Our approach can, as well, formally represent different types of reasoning that a human agent might do, perhaps in different circumstances or in different systems. Previously we noted that our current definition of a 'one-shot' round of reasoning captures an intuitive notion of how a human agent might reason in a supervisory control setting. That is, since complex systems are normally dynamic systems in which control decisions need to be made and implemented in a timely manner, the operator's reasoning activities cannot require multiple rounds of reasoning. One-shot reasoning captures the resource bounds that would be expected due both to the human's inherently limited reasoning abilities and to the requirements of a context in which control decisions need to be made and implemented in a timely manner.
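The bound can be illustrated with a short sketch: a one-shot round fires the mental-model implications over the explicit facts exactly once, in contrast with an unbounded (logically omniscient) closure that iterates to a fixed point. The helper names are assumptions for illustration; Dh is represented as (premises, conclusion) pairs, as before.

```python
def one_shot(facts, D_h):
    """One round of reasoning: each implication fires once over the explicit facts."""
    return facts | {c for prem, c in D_h if prem <= facts}

def fixpoint(facts, D_h):
    """Unbounded closure, shown only for contrast with the one-shot bound."""
    while True:
        new = one_shot(facts, D_h)
        if new == facts:
            return new
        facts = new

D_h = [({"p"}, "q"), ({"q"}, "s")]   # chained implications: p -> q, q -> s
print(one_shot({"p"}, D_h))          # {'p', 'q'}: deriving s would need a 2nd round
print(fixpoint({"p"}, D_h))          # {'p', 'q', 's'}
```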
It will be interesting to expand on this initial work and identify additional patterns of inference that can be used to express human reasoning, the systems in which they may be appropriate, and the means by which they should be formalized. More precise bounds on reasoning may also be identified. For instance, within the one-shot model, how many independent instances of simple deductive inference can a human do? A system's design might require a human to infer, in the same round, n separate conclusions. Can this requirement be satisfied by typical human cognitive resources? Can we augment the one-shot model with a multi-step (i.e., algorithmic) representation of more complex human reasoning? What might be the natural resource bounds for this type of reasoning action?

C. Knowledge of Groups of Agents in Human-Automation Systems

One of the most significant contributions of the knowledge formalism has been its ability to express notions of the knowledge of groups of agents, such as agents' knowledge of other agents' knowledge, distributed knowledge (knowledge that is distributed between the agents in the system), and common knowledge (that is, all the agents know a fact p, and know that all agents know p, and so on) [3]. This expressiveness supports analysis of centrally important system properties such as the need for an agent to know what another agent knows, the additional knowledge made available by a fact being commonly known, the potential performance cost when system failure precludes a needed fact from being common knowledge, and so forth. The importance of being able to reason formally about the knowledge of groups of agents in the design and analysis of human-automation systems is significant, as noted in [44]-[46]. For a simple example, recall the search-and-rescue robot team mentioned previously. Not only must the designer consider what the human and robot agents need to know in any state of the system, but since the robot is normally remotely operated (e.g., it is deep in a collapsed building looking for survivors) the designer must be able to capture what the human and robot agents can know about each other's knowledge at any point in time in order to determine whether the knowledge of the agents will be sufficient for task performance. It is important to explore the knowledge of groups of agents within the context of human-automation systems. Our work will consider these notions both from a conceptual perspective – for instance, what does it really mean to say that human and automation agents have common knowledge of a fact? – and in terms of formal modeling considerations. What are the theoretical issues of shared knowledge relevant to a human-automation system? It is natural to say that an operator may need to know what the automation knows, but when is it useful (and meaningful) to talk about an automation agent knowing what a human agent knows or what other automation agents know? Considering various forms of human-robot teams, for example, the need for a robot to know what the human agent knows seems clear in the case of search and rescue, personal assistant, or physical therapy robots [47]. Can we define general taxonomies of human-automation and human-robot systems in which specific types of group knowledge are required for system performance? The knowledge formalism provides a complete formal semantics and structures for the modeling and analysis of various types of group knowledge; the standard definitions are recalled below.
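For reference, the standard possible-worlds definitions of these group operators for a group G of agents, following [3] and stated over the individual indistinguishability relations ~i, are:

```latex
% Group knowledge over a group G of agents.
\begin{align*}
(E,r,m) \models E_G\varphi &\iff (E,r,m) \models K_i\varphi
      \text{ for every } i \in G,\\
(E,r,m) \models D_G\varphi &\iff (E,r',m') \models \varphi
      \text{ whenever } (r,m) \sim_i (r',m') \text{ for every } i \in G,\\
(E,r,m) \models C_G\varphi &\iff (E,r,m) \models E_G^{\,k}\varphi
      \text{ for all } k \ge 1,
      \text{ where } E_G^{\,1} = E_G \text{ and } E_G^{\,k+1} = E_G E_G^{\,k}.
\end{align*}
```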
We thus have the tools to represent and reason about the knowledge of every agent in the group, EG, distributed knowledge in the group, DG, and common knowledge, CG. An important direction for our work is to further investigate the notion of group knowledge for human-automation systems. As systems grow in size (number of agents), in heterogeneity (types of agents), and in the criticality of the mission, a common concept that binds all agents and that provides a rigorous method for evaluation will become an imperative. We believe this last direction for our research program to be particularly significant.

D. Applications

The final test of any formal approach for modeling and analysis is its applicability to real-world problems in the intended domains. The Therac-25 device problem discussed in this article served as a benchmark for demonstrating that our approach (1) can expressively capture important properties of human and non-human agents, (2) can enable us to answer queries about what human and non-human agents know and don't know, and (3) can be used to draw significant conclusions regarding a complex system's design in spite of the dissimilarity of its agents. While these results offer an important initial 'proof-of-concept', the Therac-25 problem is a cleanly bounded and thoroughly researched scenario with faults in design identified a priori. What is the value of our formalism as a modeling and analysis tool for systems that are not as neatly defined or for which answers are not as clear? The state-of-the-art in human-automation system design will provide a valuable test-bed of systems for evaluating our approach 'in real life'; we are particularly interested in systems in the aviation and human-robot team domains. In the aviation domain, current directions in integrated flight deck systems design, aviation information management, and the evolving role of the pilot as supervisory controller underscore the inherent relevance of a knowledge-based approach. Will our formalism be able to provide useful insight regarding the design of these highly complex and sophisticated systems? The application of our formalism to the modeling and analysis of human-robot teams will serve as another challenging and rigorous test of its viability. On the one hand, the importance of capturing notions of agent knowledge and common knowledge, and the need for a formal method for reasoning about knowledge in this domain, have already been noted in the literature [45], [48], and so the potential value of our approach is clear. On the other hand, these systems have unique properties that make accurate and useful representation exceptionally difficult. A knowledge-based model of a human-robot team will need to capture the (albeit artificial) intelligence of the non-human robotic agents and the often hostile environment within which these systems operate (e.g., the collapsed-buildings environment of search and rescue teams). Moreover, our approach must be able to represent and reason usefully about the complex and dynamic patterns of communication and knowledge distribution between multiple human and non-human agents that are found in this domain. Consider that in a team of humans and unmanned aerial vehicles (UAVs) the multiple UAVs may communicate among themselves to maintain formation and ensure surveillance coverage while information regarding potential targets is transmitted to the human operator.
Or the operator may communicate different commands to different UAVs, and receive different responses, which may be a function of the UAV's reasoning ability and knowledge, rather than just data. How best to capture the role of the knowledge of all the agents in this and similar systems? What insights can we gain here using our knowledge-based approach?

VII. CONCLUSION

As noted in the introduction to this work, the critical nature of many human-automation systems imposes a clear need for rigorous design and analysis tools that are practically useful. This is well recognized, and the development of methods and tools has been an active area of research for many years. Unfortunately, the highly complex nature of many of the tools too often results in their isolation in academic and scientific arenas. Subsequently, practical system design remains to a large extent a somewhat ad hoc process. Towards that end we have introduced and described the initial development of a novel approach to modeling and reasoning about these systems that is intended to both satisfy the need for formal rigor and be sufficiently intuitive for practical use. One of the most significant aspects of this framework is that agent knowledge is ascribed and analyzed with respect to the automaton representing the complete given human-automation system. Our initial results suggest that reasoning about these systems from the perspective of agent knowledge is in fact a viable and valuable approach.

APPENDIX

VIII. THE THERAC-25 DEVICE PROBLEM – DETAILED ANALYSIS

A. Problem Description

"Between June 1985 and January 1987, a computer-controlled radiation therapy machine, called the Therac-25, massively overdosed six people. These accidents have been described as the worst in the [then] 35-year history of medical accelerators" [10]. During investigation of the Therac-25 failures, a common factor in two of the accidents was the device's operator. In both cases, the operator had started and completed the patient data entry task, and then had gone back to edit one or more values before starting the treatment. Analysis showed that the operator's data entry speed was the cause of the machine's behavior – while editing data was an allowable function, the operator was able to edit the data and return to a 'data entry complete' state so quickly that the machine was never aware that the data editing had occurred. The operator then initiated treatment believing that dosage would be in accordance with the newly edited parameter values displayed on the interface, while the actual dosage given to the patient was as defined by the original data values. Several flaws in the system's design were implicated in the accidents. The faulty design introduced a system state during which the treatment values could be edited, and the machine activated, without the new values actually being processed. Further, the interface provided almost no information to the operator about the internal behavior of the machine, and the operator thus had no way of knowing whether the machine had accepted her data editing.

B. Modeling the System

1) Agents

The two agents in our model of the system are the human operator, denoted h, and the Therac-25 device, denoted a.
(Including the environment in this model does not alter the analysis and so we do not add it here.)

2) Local states

The local states of each agent consist of the variables or information the agent has available in that state. The human operator's local states, which capture the information available via the interface, include a variable representing the status of data entry or modification, and a variable for the 'ready-to-treat' status of the device. Lh is the set of possible states for the human agent, which have the form lh = (data.entry, system.status), where the possible values for each variable are:

data.entry ∈ {⊥, data.in, new.data.in}, corresponding to no data entered (i.e., treatment set-up has not yet begun), initial data entry complete, and modified data entry complete, respectively. Note that by design the status of data entry is complete only when all required fields are filled and the cursor is in a specified field. This signifies to the system that data entry is complete, and normally triggers the system's processing of the entered data.

system.status ∈ {sys.ready.no, sys.ready.yes, treat}, indicating whether the system is not ready to treat, ready to treat, or treating.

The machine's local states consist of variables indicating whether there is complete data entry, whether the entered data has been modified, and whether the system is treating. La is the set of possible states for the automation device, which have the form la = (data, new.data, status); the possible values for each variable are data ∈ {⊥, treat.data}, new.data ∈ {new.no, new.yes}, and status ∈ {ready.no, ready.yes, treat}.

3) Actions and joint actions

The operator of the Therac-25 device can input data, modify the data entered, initiate treatment, or do nothing (a null action, denoted Λ); ACTh = {input.data, modify.data, press.treat, Λ}. The device can process data, treat, or do nothing; ACTa = {process.data, treat, Λ}. The joint actions of the human and automation agents are thus:

(input.data, Λ), (Λ, process.data), (modify.data, Λ), (modify.data, process.data), (press.treat, treat), (press.treat, Λ), (Λ, Λ)

4) Local states and protocols

The operator's local states Lh and the protocol ACTh defining the actions allowable in each local state lh are listed below. The human operator's local states are:

Lh = {(⊥, sys.ready.no), (data.in, sys.ready.no), (data.in, sys.ready.yes), (new.data.in, sys.ready.no), (new.data.in, sys.ready.yes), (data.in, treating), (new.data.in, treating)}

The operator's protocol ACTh is:

ACTh(⊥, sys.ready.no) = {input.data} ∪ {Λ}
ACTh(data.in, sys.ready.no) = {modify.data} ∪ {Λ}
ACTh(data.in, sys.ready.yes) = {press.treat} ∪ {modify.data} ∪ {Λ}
ACTh(new.data.in, sys.ready.no) = {modify.data} ∪ {Λ}
ACTh(new.data.in, sys.ready.yes) = {press.treat} ∪ {modify.data} ∪ {Λ}
ACTh(data.in, treating) = {Λ}
ACTh(new.data.in, treating) = {Λ}

The Therac-25 device's local states La and the protocol ACTa defining the actions allowable in each local state la are listed below.
Therac-25 local states:

La = {(⊥, new.no, ready.no), (treat.data, new.no, ready.no), (treat.data, new.no, ready.yes), (treat.data, new.yes, ready.no), (treat.data, new.yes, ready.yes), (treat.data, new.no, treat), (treat.data, new.yes, treat)}

Therac-25 actions ACTa:

ACTa(⊥, new.no, ready.no) = {Λ}
ACTa(treat.data, new.no, ready.no) = {process.data}
ACTa(treat.data, new.no, ready.yes) = {treat} ∪ {Λ}
ACTa(treat.data, new.yes, ready.no) = {process.data}
ACTa(treat.data, new.yes, ready.yes) = {treat} ∪ {Λ}
ACTa(treat.data, new.no, treat) = {treat}
ACTa(treat.data, new.yes, treat) = {treat}

5) Dynamic system behavior

The system's dynamic behavior is represented as the set of runs R that result from the operator's protocol above and the automaton's protocol, in which the transitions from one global state to the next are in accordance with a transition function τ; the complete set of possible runs is defined to be all executions of the automaton in Fig. 1. Note that we assume a correctly operating device, and so anomalous behaviors that might arise should the device malfunction are not included. We also bound the scope of the model by defining the states in which treatment has been initiated as end states. Finally, in the interest of readability the automaton does not display the self-loops that result from joint null actions (Λ, Λ) or from actions that have no effect (e.g., the operator action press.treat when the machine is not ready).

The global states of the automaton in Fig. 1 are:

g0 = ((⊥, sys.ready.no), (⊥, new.no, ready.no))
g1 = ((data.in, sys.ready.no), (treat.data, new.no, ready.no))
g2 = ((data.in, sys.ready.yes), (treat.data, new.no, ready.yes))
g3 = ((new.data.in, sys.ready.no), (treat.data, new.yes, ready.no))
g4 = ((new.data.in, sys.ready.yes), (treat.data, new.no, ready.yes))
g5 = ((new.data.in, treating), (treat.data, new.no, treat))
g6 = ((new.data.in, sys.ready.yes), (treat.data, new.yes, ready.yes))
g7 = ((new.data.in, treating), (treat.data, new.yes, treat))
g8 = ((data.in, treating), (treat.data, new.no, treat))

Its labeled transitions are: g0 →(input.data, Λ) g1; g1 →(Λ, process.data) g2; g1 →(modify.data, process.data) g4; g2 →(modify.data, Λ) g3; g2 →(press.treat, treat) g8; g3 →(Λ, process.data) g6; g3 →(modify.data, process.data) g4; g4 →(modify.data, Λ) g3; g4 →(press.treat, treat) g5; g6 →(press.treat, treat) g7; and the treating states g5, g7, and g8 each loop under (Λ, treat).

Fig. 1. An automaton describing the dynamic behavior of the Therac human-automation system.

C. Impossibility of a Sound Epistemic Setup for the Therac Example

A logical treatment of the Therac example would start by defining a set Φ of primitive propositions or facts that are relevant to the system's dynamic behavior. For example, we may have Φ = {p1,…,p6} where

p1 = 'no treatment data is entered'
p2 = 'initial data entry is complete'
p3 = 'modified data entry is complete'
p4 = 'modified data entry has been processed'
p5 = 'device is ready to treat'
p6 = 'device is treating'

Φ is derived from the Therac-25 system specification; it represents system design assumptions regarding the knowledge an operator is required to have. Recall that, in general, the proposition pbad should capture the statement 'the current state of the system is not acceptable'. In every particular system, pbad will correspond to a property specific to that system. In the Therac-25 example, we identify pbad with ¬p4 ∧ p6: a state in which the operator has modified the data and the device is treating, but the treatment is not in accordance with the defined values.
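A compact Python encoding of this model makes the impossibility argument below mechanical. The dictionaries mirror the state lists above with shortened names (an illustrative sketch, not the authors' tooling); the two runs constructed are exactly the r and r' used in the proof of Theorem 2.

```python
# Global states of Fig. 1 as (operator local state, device local state).
G = {
    "g0": (("empty", "ready.no"),        ("empty", "new.no",  "ready.no")),
    "g1": (("data.in", "ready.no"),      ("data",  "new.no",  "ready.no")),
    "g2": (("data.in", "ready.yes"),     ("data",  "new.no",  "ready.yes")),
    "g3": (("new.data.in", "ready.no"),  ("data",  "new.yes", "ready.no")),
    "g4": (("new.data.in", "ready.yes"), ("data",  "new.no",  "ready.yes")),
    "g5": (("new.data.in", "treating"),  ("data",  "new.no",  "treat")),
    "g6": (("new.data.in", "ready.yes"), ("data",  "new.yes", "ready.yes")),
    "g7": (("new.data.in", "treating"),  ("data",  "new.yes", "treat")),
}

def p_bad(g):
    # pbad = ¬p4 ∧ p6: the device is treating although the operator's edit
    # was never registered (display shows new data, device still holds new.no).
    lh, la = G[g]
    return la[2] == "treat" and lh[0] == "new.data.in" and la[1] == "new.no"

r      = ["g0", "g1", "g4", "g5"]              # edit raced with data processing
rprime = ["g0", "g1", "g2", "g3", "g6", "g7"]  # edit processed before treating

same_operator_view = G[r[-1]][0] == G[rprime[-1]][0]
print(same_operator_view, p_bad(r[-1]), p_bad(rprime[-1]))   # True True False
```

The printed triple is exactly the situation the proof exploits: the operator's view is identical at the two final points, yet pbad holds at one and not the other.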
At any given time in the system's behavior, the truth value of each proposition in Φ is well-defined, based on its description in words above. This truth value is a function of the agents' actions and the physical dynamics of the complete system. For instance, the proposition p3, standing for 'modified data entry complete', will be true only if the operator has indeed completed data entry and returned the cursor to the correct field; the proposition p4, 'modified data entry has been processed', holds only when the Therac device has recognized that the data has been modified and has reset its treatment parameters. More specifically, the interpretation π that maps the propositions pi ∈ Φ and the global states g to True or False captures the behavior of the Therac human-automation system as designed. In this post hoc analysis, we can straightforwardly show that no sound epistemic system exists for the design depicted in Figure 1.

Theorem 2: There can be no adequate epistemic setup E for the existing design of the Therac-25.

Proof: As suggested in the main text, we prove this result via the connection between possible-worlds knowledge and bounded knowledge. Let R be the set of runs of the automaton depicted in Figure 1. Let r ∈ R be a run whose first four global states are g0, g1, g4 and g5, and similarly let r' be a run with prefix g0, g1, g2, g3, g6, g7. Observe that r(4) = g5, while r'(6) = g7. In the run r the operator enters initial data, and then enters a modified version of the data, while the device processes the initial data and signals that data entry is complete. Once the system-ready indication is displayed, the operator initiates treatment. Treatment is not, however, in accordance with the new data, and thus (R,r,4) |= pbad. In the run r', the operator enters initial data, this data is processed by the device, the operator enters modified data, and the new data is also processed. In this case, the processed data coincides with the latest entered data, and so (R,r',6) |= ¬pbad. The crucial point is that (r,4) ~h (r',6). It follows that (R,r,4) |= pbad ∧ ¬Kh pbad. By Corollary 2, we have that (E,r,4) |= pbad ∧ ¬K̂h pbad for every epistemic setup E for the system R. It follows that there can be no adequate epistemic setup for R. QED

D. Problem Resolution

Our approach not only identifies problems with agent knowledge, but provides insight into problem resolution and the means to determine whether the resolution is adequate. For the Therac-25 system, the modeling and analysis process suggests possible and provably correct solutions. In the situation described above it was shown that the operator had no way of determining that the device had processed the data, nor was she aware of the bad state once treatment was initiated prior to data processing. A straightforward design solution would be to modify the interface so that information regarding the device's data processing status would always be available. For instance, the operator's modification to the entered data might disable further system action and cause the system-ready indicator to turn red. Only once new data had been completely entered and then processed by the device would the device display a data.ready signal on the control panel, in the operator's local view. In this manner, the human operator would be able to know when treatment could be initiated safely. A sound and adequate epistemic setting for this modified design would have the following features. We add another proposition p7 to Φ, standing for 'the value of data.ready is no'.
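Continuing the sketch above, the fix amounts to adding the device's true data-processing status to the operator's local state. In the snippet below the new data.ready field is computed from the global state as an oracle, standing in for the re-engineered device that actually tracks edits; the field name and encoding are assumptions for illustration.

```python
def view_modified(g):
    """Operator local state in the modified design: the original display plus a
    data.ready flag that is 'yes' only when every operator edit has been
    registered and processed by the device (p7 corresponds to data.ready = no)."""
    lh, la = G[g]
    edit_missed = lh[0] == "new.data.in" and la[1] == "new.no"
    return lh + ("data.ready.no" if edit_missed else "data.ready.yes",)

# The points that doomed the original design are now distinguishable:
print(view_modified("g4") == view_modified("g6"))   # False
print(view_modified("g5") == view_modified("g7"))   # False
```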
The operator's bounded knowledge of system readiness to treat after modification of treatment values would then be based on the explicit information available on the interface display. It is straightforward to see that this change eliminates the design flaw that precluded satisfactory operator knowledge.

E. Conclusion

This Therac-25 problem model and its suggested solution were simplified in order to serve as a straightforward example of how our approach may be applied. Note that multiple iterations of the analysis and solution-definition process will derive a system design that is provably satisfactory for supervisory control. Though for most complex systems of interest neither the model nor the solution will be nearly as easy to define,⁵ the Therac-25 example has shown how a 'real-life' system may be modeled, and its human-automation interface design evaluated, using the knowledge of its operator.

Footnote 5: We also do not suggest that adding display elements for each fact the human needs to know is a desirable or even viable solution option. Display optimization for the knowledge that is identified as necessary is (far) outside the scope of this research.

IX. REFERENCES

[1] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi, "Knowledge-based programs," presented at PODC '95, Ottawa, 1995.
[2] J. Y. Halpern and R. Fagin, "Modelling knowledge and action in distributed systems," Distributed Computing, vol. 3, pp. 159-179, 1989.
[3] J. Y. Halpern and Y. Moses, "Knowledge and common knowledge in a distributed environment," Journal of the ACM, vol. 37, pp. 549-587, 1990.
[4] C. Dwork and Y. Moses, "Knowledge and common knowledge in a byzantine environment: crash failures," Information and Computation, vol. 88, pp. 156-186, 1990.
[5] R. I. Brafman, J.-C. Latombe, Y. Moses, and Y. Shoham, "Knowledge as a tool in motion planning under uncertainty," presented at Theoretical Aspects of Reasoning about Knowledge: Proc. Fifth Conference, San Francisco, California, 1994.
[6] S. L. Ricker and K. Rudie, "Know means no: Incorporating knowledge into discrete-event control systems," IEEE Transactions on Automatic Control, vol. 45, pp. 1656-1668, 2000.
[7] K. Rudie, S. Lafortune, and F. Lin, "Minimal communication in a distributed discrete-event system," IEEE Transactions on Automatic Control, vol. 48, pp. 957-975, 2003.
[8] S. Anderson and J. K. Filipe, "Guaranteeing temporal validity with a real-time logic of knowledge," presented at Proceedings of the 23rd IEEE International Conference on Distributed Computing Systems Workshops, Providence, Rhode Island, 2003.
[9] J. K. Filipe, M. Felici, and S. Anderson, "Timed knowledge-based modelling and analysis: on the dependability of socio-technical systems," presented at Proceedings of the 8th International Conference on Human Aspects of Advanced Manufacturing: Agility & Hybrid Automation, Rome, Italy, 2003.
[10] N. Leveson, Safeware: System Safety and Computers. Addison-Wesley, 1995.
[11] J. F. Sowa, Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove: Brooks/Cole Thomson Learning, 2000.
[12] E. Davis, Representations of Commonsense Knowledge. San Mateo, CA: Morgan Kaufmann, 1990.
[13] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi, Reasoning about Knowledge. Cambridge, Massachusetts: MIT Press, 2003.
[14] J. Hintikka, Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca: Cornell University Press, 1962.
[15] R. C. Moore, "A formal theory of knowledge and action," in Formal Theories of the Commonsense World, J. R. Hobbs and R. C. Moore, Eds. Norwood, New Jersey: Ablex, 1985.
[16] T. B. Sheridan, Humans and Automation: System Design and Research Issues. Santa Monica: John Wiley & Sons, Inc., 2002.
[17] T. B. Sheridan, "Human supervisory control," in Handbook of Systems Engineering and Management, A. P. Sage and W. B. Rouse, Eds. New York: John Wiley & Sons, Inc., 1999.
[18] C. D. Wickens and J. G. Hollands, Engineering Psychology and Human Performance, Third Edition. Upper Saddle River, New Jersey: Prentice Hall, 2000.
[19] J. M. Carroll and J. R. Olson, "Mental models in human-computer interaction," in Handbook of Human-Computer Interaction, M. Helander, Ed. Amsterdam: Elsevier, 1988.
[20] S. Gentner and A. L. Stevens, Mental Models. NY: Erlbaum, 1983.
[21] D. A. Norman, "Some observations on mental models," in Mental Models, D. Gentner and A. Stevens, Eds. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1983.
[22] D. Gopher and E. Donchin, "Workload: An examination of the concept," in Handbook of Perception and Performance, vol. 2, K. Boff, L. Kauffman, and J. Thomas, Eds. New York: Wiley, 1986, pp. 41-1 to 41-49.
[23] D. Besnard, D. Greathead, and G. Baxter, "When mental models go wrong: co-occurrences in dynamic, critical systems," International Journal of Human-Computer Studies, vol. 60, pp. 117-128, 2004.
[24] L. J. Rips, The Psychology of Proof: Deductive Reasoning in Human Thinking. Cambridge: MIT Press, 1994.
[25] C. D. Wickens, "Processing resources in attention," in Varieties of Attention, R. Parasuraman and R. Davies, Eds. New York: Academic, 1984, pp. 63-98.
[26] C. D. Wickens, S. Dixon, and D. Chang, "Using interference models to predict performance in a multiple-task UAV environment - 2 UAVs," Aviation Human Factors Division, Institute of Aviation, Savoy, Illinois, Technical Report AHFD-03-9/MAAD-03-1, April 2003.
[27] J. Y. Halpern and Y. Moses, "Using counterfactuals in knowledge-based programming," Distributed Computing, vol. 17, pp. 91-106, 2004.
[28] Y. Moses, "Resource-bounded knowledge (extended abstract)," presented at Proc. Second Conference on Theoretical Aspects of Reasoning About Knowledge, San Francisco, Calif., 1988.
[29] Y. Moses, "Knowledge and communication (a tutorial)," presented at Theoretical Aspects of Reasoning About Knowledge: Proc. Fourth Conference, San Francisco, Calif., 1992.
[30] Y. Moses and Y. Shoham, "Belief as defeasible knowledge," Artificial Intelligence, vol. 64, pp. 299-322, 1993.
[31] E. Mendelson, Introduction to Mathematical Logic, Fourth Edition. Boca Raton: Chapman & Hall, CRC Press LLC, 1997.
[32] J. R. Hobbs and R. C. Moore, Eds., Formal Theories of the Commonsense World, Ablex Series in Artificial Intelligence. Norwood, New Jersey: Ablex Publishing Company, 1985.
[33] J. Y. Halpern, Y. Moses, and M. Y. Vardi, "Algorithmic knowledge," presented at Theoretical Aspects of Reasoning About Knowledge: Proceedings of the Fifth Conference, San Francisco, Calif., 1994.
[34] J. R. Anderson and C. Lebiere, The Atomic Components of Thought. Mahwah, NJ: Lawrence Erlbaum Associates, 1998.
[35] M. D. Byrne and A. Kirlik, "Using computational cognitive modeling to diagnose possible sources of aviation error," Aviation Human Factors Division, Institute of Aviation, University of Illinois, Technical Report AHFD-03-14/NASA-03-4, 2003.
[36] R. Stout, J. A. Cannon-Bowers, and E. Salas, "The role of shared mental models in developing team situation awareness: implications for training," Training Research Journal, vol. 2, pp. 85-116, 1996.
[37] K. Konolige, "Belief and incompleteness," in Formal Theories of the Commonsense World, J. R. Hobbs and R. C. Moore, Eds. Norwood, NJ: Ablex, 1985.
[38] R. M. J. Byrne, "Mental models and counterfactual thoughts about what might have been," Trends in Cognitive Science, vol. 6, pp. 426-431, 2002.
[39] R. M. J. Byrne and S. M. Egan, "Counterfactual and prefactual conditionals," Canadian Journal of Experimental Psychology, vol. 58, pp. 113-120, 2004.
[40] V. A. Thompson and R. M. J. Byrne, "Reasoning counterfactually: making inferences about things that didn't happen," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 28, pp. 1154-1170, 2002.
[41] L. Bainbridge, "Types of representation," in Tasks, Errors and Mental Models, L. P. Goodstein, H. B. Anderson, and S. E. Olsen, Eds. London: Taylor and Francis Ltd., 1988.
[42] D. D. Woods and E. M. Roth, "Cognitive systems engineering," in Handbook of Human-Computer Interaction, M. Helander, Ed. New York: North-Holland, 1988.
[43] J. McCarthy and P. J. Hayes, "Some philosophical problems from the standpoint of artificial intelligence," Machine Intelligence, vol. 4, 1969.
[44] K. Christoffersen and D. D. Woods, "How to make automated systems team players," in Advances in Human Performance and Cognitive Engineering Research, vol. 2. Elsevier Science Ltd., 2002, pp. 1-12.
[45] S. Kiesler, "Fostering common ground in human-robot interaction," presented at Proceedings of the IEEE International Workshop on Robots and Human Interactive Communication (RO-MAN), 2005.
[46] G. Klein, P. J. Feltovich, J. M. Bradshaw, and D. D. Woods, "Common ground and coordination in joint activity," in Organizational Simulation, W. R. Rouse and K. B. Boff, Eds. New York City, NY: John Wiley, 2004, in press.
[47] J. L. Burke, R. R. Murphy, E. Rogers, V. J. Lumelsky, and J. Scholtz, "Final report for the DARPA/NSF interdisciplinary study on human-robot interaction," IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, vol. 34, pp. 103-112, 2004.
[48] R. R. Murphy, "Human-robot interaction in rescue robotics," IEEE Transactions on Systems, Man, and Cybernetics, vol. 34, pp. 138-153, 2004.
9d56ff48-2a60-45b3-863d-0d17be4c0eaa
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Concave Utility Question This post will just be a concrete math question. I am interested in this question because I have recently come tor reject the independence axiom of VNM, and am thus playing with some weaker versions. Let Ω.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} 
Let Ω be a finite set of deterministic outcomes. Let L be the space of all lotteries over these outcomes, and let ⪰ be a relation on L. We write A∼B if A⪰B and B⪰A. We write A≻B if A⪰B but not A∼B.

Here are some axioms we can assume about ⪰:

A1. For all A,B∈L, either A⪰B or B⪰A (or both).

A2. For all A,B,C∈L, if A⪰B, and B⪰C, then A⪰C.

A3. For all A,B,C∈L, if A⪰B, and B⪰C, then there exists a p∈[0,1] such that B∼pA+(1−p)C.

A4. For all A,B∈L, and p∈[0,1], if A⪰B, then pA+(1−p)B⪰B.

A5. For all A,B∈L, and p∈[0,1], if p>0 and B⪰pA+(1−p)B, then B⪰A.

Here is one bonus axiom:

B1. For all A,B,C∈L, and p∈[0,1], A⪰B if and only if pA+(1−p)C⪰pB+(1−p)C.

(Note that B1 is stronger than both A4 and A5.)

Finally, here are some conclusions of successively increasing strength:

C1. There exists a function u:L→[0,1] such that A⪰B if and only if u(A)≥u(B).

C2. Further, we require u is quasi-concave.

C3. Further, we require u is continuous.

C4. Further, we require u is concave.

C5. Further, we require u is linear.

The standard VNM utility theorem can be thought of as saying A1, A2, A3, and B1 together imply C5. Here is the main question I am curious about:

Q1: Do A1, A2, A3, A4, and A5 together imply C4? **[ANSWER: NO]**

(If no, how can we salvage C4, by adding or changing some axioms?)

Here are some sub-questions that would constitute significant partial progress, and that I think are interesting in their own right:

Q2: Do A1, A2, A3, and A4 together imply C3? **[ANSWER: NO]**

Q3: Do C3 and A5 together imply C4? **[ANSWER: NO]**

(Feel free to give answers that are only partial progress, and use this space to think out loud or discuss anything else related to weaker versions of VNM.)

EDIT: AlexMennen actually resolved the question in the negative as stated, but my curiosity is not resolved, since his argument violates continuity, and I really care about concavity. My updated main question is now:

Q4: Do A1, A2, A3, A4, and A5 together imply that there exists a concave function u:L→[0,1] such that A⪰B if and only if u(A)≥u(B)? **[ANSWER: NO]**

(i.e. We do not require u to be continuous.)

This modification also implies interest in the subquestion:

**Q5: Do A1, A2, A3, and A4 together imply C2?**

EDIT 2: Here is another bonus axiom:

B2. For all A,B∈L, if A≻B, then there exists some C∈L such that A≻C≻B.

(Really, we don't need to assume C is already in L. We just need it to be possible to add a C, and extend our preferences in a way that satisfies the other axioms, and A3 will imply that such a lottery was already in L. We might want to replace this with a cleaner axiom later.)

Q6: Do A1, A2, A3, A5, and B2 together imply C4? **[ANSWER: NO]**

EDIT 3: We now have negative answers to everything other than Q5, which I still think is pretty interesting. We could also weaken Q5 to include other axioms, like A5 and B2. Weakening the conclusion doesn't help, since it is easy to get C2 from C1 and A4. I would still really like some axioms that get us all the way to a concave function, but I doubt there will be any simple ones. Concavity feels like it really needs more structure that does not translate well to a preference relation.
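On that last remark, here is a minimal sketch (mine, not from the original post) of the easy direction, that C1 and A4 give C2. Suppose u represents ⪰ as in C1. Quasi-concavity of u says each upper contour set {A∈L : u(A)≥c} is convex. So suppose u(A)≥c and u(B)≥c, and assume A⪰B without loss of generality (using A1). For any p∈[0,1], A4 gives pA+(1−p)B⪰B, so u(pA+(1−p)B)≥u(B)≥c, and the mixture stays in the contour set.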
f658ebdc-17e2-4d37-bef9-cbb757d521fc
trentmkelly/LessWrong-43k
LessWrong
What's the expected QALY (quality-adjusted life years) loss per capita due to SARS-CoV-2? QALY gain of increasing ICU capacity? Of buying new ventilators? In my online echo chambers and news bubbles it seems that a lot of effort worldwide is spent on obtaining more ventilators and increasing ICU capacity. Given that 75% (edit: this value might be too high; 50% might be closer to the truth, according to a comment below) of patients on ventilators die anyway, that a single patient occupies an ICU bed for weeks, and that most of the dead are old or have serious pre-existing conditions, I wonder how much good that does. I strongly suspect that more is spent on those efforts than they are worth, because the reports of overwhelmed ICU units are so visually and emotionally captivating. But I'm too lazy to look up the numbers (and the QALY estimate would probably require some original modelling, which I have no experience with). Related fun fact: Americans spend disproportionately more money on healthcare in their final years. This study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1464043/ (I haven't checked whether it's been reproduced) suggests 18-25%, depending on the source of funding. It's not as ridiculous as it sounds, though, because you don't know in advance which year is going to be the final one.
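For what it's worth, here is the shape a minimal back-of-envelope model might take. Every input below is a loudly assumed placeholder, not a researched value; this is a sketch of the kind of modelling the question asks about, not an answer to it:

```python
# Back-of-envelope QALY gain from treating one patient with a ventilator.
# Every number below is an illustrative assumption, not a researched value.
p_survive_with_vent = 0.35     # assumed; the post suggests 50-75% die anyway
p_survive_without_vent = 0.05  # assumed: most such patients die without one
remaining_life_years = 8.0     # assumed: patients skew old / comorbid
quality_factor = 0.7           # assumed quality discount on remaining years

qaly_gain = (p_survive_with_vent - p_survive_without_vent) \
            * remaining_life_years * quality_factor
print(f"QALY gain per treated patient: {qaly_gain:.2f}")  # 1.68

# Crude cost-effectiveness, assuming a $30k ventilator serves 10 patients:
cost_per_qaly = 30_000 / (10 * qaly_gain)
print(f"Cost per QALY: ${cost_per_qaly:,.0f}")  # $1,786
```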
01f3adf5-b3dd-4ed1-b263-d9ffee474b5a
trentmkelly/LessWrong-43k
LessWrong
Schelling Orders None
6e13ef35-7b95-4afe-8fc8-49194291d787
StampyAI/alignment-research-dataset/blogs
Blogs
Decision Theory Decision theory and artificial intelligence typically try to compute something resembling $$\underset{a \ \in \ Actions}{\mathrm{argmax}} \ \ f(a).$$ I.e., maximize some function of the action. This tends to assume that we can detangle things enough to see outcomes as a function of actions. For example, AIXI represents the agent and the environment as separate units which interact over time through clearly defined i/o channels, so that it can then choose actions maximizing reward.

![AIXI](https://intelligence.org/wp-content/uploads/2018/10/AIXI.png)

When the agent model is [a part of the environment model](http://intelligence.org/embedded-agency), it can be significantly less clear how to consider taking alternative actions.

![Embedded agent](https://intelligence.org/wp-content/uploads/2018/10/Embedded-Agent.png)

For example, because the agent is [smaller than the environment](http://intelligence.org/embedded-agents#ewm), there can be other copies of the agent, or things very similar to the agent. This leads to contentious decision-theory problems such as [the Twin Prisoner’s Dilemma and Newcomb’s problem](https://intelligence.org/2017/10/22/fdt/). If Emmy Model 1 and Emmy Model 2 have had the same experiences and are running the same source code, should Emmy Model 1 act like her decisions are steering both robots at once? Depending on how you draw the boundary around “yourself”, you might think you control the action of both copies, or only your own. This is an instance of the problem of counterfactual reasoning: how do we evaluate hypotheticals like “What if the sun suddenly went out”?

Problems of adapting **decision theory** to embedded agents include:

* counterfactuals
* Newcomblike reasoning, in which the agent interacts with copies of itself
* reasoning about other agents more broadly
* extortion problems
* coordination problems
* logical counterfactuals
* logical updatelessness

The most central example of why agents need to think about counterfactuals comes from counterfactuals about their own actions. The difficulty with **action counterfactuals** can be illustrated by the [five-and-ten problem](https://agentfoundations.org/item?id=1399). Suppose we have the option of taking a five dollar bill or a ten dollar bill, and all we care about in the situation is how much money we get. Obviously, we should take the $10. However, it is not so easy as it seems to reliably take the $10. If you reason about yourself as just another part of the environment, then you can [know your own behavior](http://intelligence.org/embedded-agents#rd). If you can know your own behavior, then it becomes difficult to reason about what would happen if you behaved *differently*. This throws a monkey wrench into many common reasoning methods. How do we formalize the idea “Taking the $10 would lead to *good* consequences, while taking the $5 would lead to *bad* consequences,” when sufficiently rich self-knowledge would reveal one of those scenarios as inconsistent? And if we *can’t* formalize any idea like that, how do real-world agents figure out to take the $10 anyway?
If we try to calculate the expected utility of our actions by Bayesian conditioning, as is common, knowing our own behavior leads to a divide-by-zero error when we try to calculate the expected utility of actions we know we don’t take: \(\lnot A\) implies \(P(A)=0\), which implies \(P(B \& A)=0\), which implies $$P(B|A) = \frac{P(B \& A)}{P(A)} = \frac{0}{0}.$$ Because the agent doesn’t know how to separate itself from the environment, it gets gnashing internal gears when it tries to imagine taking different actions. But the biggest complication comes from Löb’s Theorem, which can make otherwise reasonable-looking agents take the $5 because “If I take the $10, I get $0”! And in a *stable* way—the problem can’t be solved by the agent learning or thinking about the problem more. This might be hard to believe; so let’s look at a detailed example. The phenomenon can be illustrated by the behavior of simple logic-based agents reasoning about the five-and-ten problem. Consider this example:   ![Five-and-ten problem](https://intelligence.org/wp-content/uploads/2018/10/five-and-ten.png)   We have the source code for an agent and the universe. They can refer to each other through the use of quining. The universe is simple; the universe just outputs whatever the agent outputs. The agent spends a long time searching for proofs about what happens if it takes various actions. If for some \(x\) and \(y\) equal to \(0\), \(5\), or \(10\), it finds a proof that taking the \(5\) leads to \(x\) utility, that taking the \(10\) leads to \(y\) utility, and that \(x>y\), it will naturally take the \(5\). We expect that it won’t find such a proof, and will instead pick the default action of taking the \(10\). It seems easy when you just imagine an agent trying to reason about the universe. Yet it turns out that if the amount of time spent searching for proofs is enough, the agent will always choose \(5\)! The proof that this is so is by [Löb’s theorem](https://intelligence.org/files/lob-notes-IAFF.pdf). Löb’s theorem says that, for any proposition \(P\), if you can prove that a *proof* of \(P\) would imply the *truth* of \(P\), then you can prove \(P\). In symbols, with “\(□X\)” meaning “\(X\) is provable”: $$□(□P \to P) \to □P.$$ In the version of the five-and-ten problem I gave, “\(P\)” is the proposition “if the agent outputs \(5\) the universe outputs \(5\), and if the agent outputs \(10\) the universe outputs \(0\)”. Supposing it is provable, the agent will eventually find the proof, and return \(5\) in fact. This makes the sentence *true*, since the agent outputs \(5\) and the universe outputs \(5\), and since it’s false that the agent outputs \(10\). This is because false propositions like “the agent outputs \(10\)” imply everything, *including* the universe outputting \(5\). The agent can (given enough time) prove all of this, in which case the agent in fact proves the proposition “if the agent outputs \(5\) the universe outputs \(5\), and if the agent outputs \(10\) the universe outputs \(0\)”. And as a result, the agent takes the $5. We call this a “spurious proof”: the agent takes the $5 because it can prove that *if* it takes the $10 it has low value, *because* it takes the $5. It sounds circular, but sadly, is logically correct. More generally, when working in less proof-based settings, we refer to this as a problem of spurious counterfactuals. The general pattern is: counterfactuals may spuriously mark an action as not being very good. This makes the AI not take the action. 
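To make the construction above concrete, here is a schematic Python sketch of the agent/universe pair (my rendering, not MIRI's code). The `proves` function stands in for a bounded proof search over an arithmetized theory, which is stubbed out here; the Löbian point is that with a real search and a long enough proof bound, the guarded sentence becomes provable and the agent returns 5:

```python
def proves(sentence: str) -> bool:
    # Stand-in for a bounded proof search over an arithmetized theory;
    # a real implementation would enumerate all proofs up to some length N.
    return False  # stubbed out so the sketch runs

def A() -> int:
    # The agent can refer to its own source code, and to U's, via quining.
    # It searches for a proof that taking $5 beats taking $10.
    if proves("A() = 5 implies U() = 5, and A() = 10 implies U() = 0"):
        return 5
    return 10  # default action

def U() -> int:
    # The universe just outputs whatever the agent outputs.
    return A()

print(U())  # 10 under the stub; with a real search and a long enough
            # proof bound, Löb's theorem makes the agent output 5
```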
Depending on how the counterfactuals work, this may remove any feedback which would “correct” the problematic counterfactual; or, as we saw with proof-based reasoning, it may actively help the spurious counterfactual be “true”. Note that because the proof-based examples are of significant interest to us, “counterfactuals” actually have to be **counter*logicals***; we sometimes need to reason about logically impossible “possibilities”. This rules out most existing accounts of counterfactual reasoning. You may have noticed that I slightly cheated. The only thing that broke the symmetry and caused the agent to take the $5 was the fact that “\(5\)” was the action that was taken when a proof was found, and “\(10\)” was the default. We could instead consider an agent that looks for any proof at all about what actions lead to what utilities, and then takes the action that is better. This way, which action is taken is dependent on what order we search for proofs. Let’s assume we search for short proofs first. In this case, we will take the $10, since it is very easy to show that \(A()=5\) leads to \(U()=5\) and \(A()=10\) leads to \(U()=10\). The problem is that spurious proofs can be short too, and don’t get much longer when the universe gets harder to predict. If we replace the universe with one that is provably functionally the same, but is harder to predict, the shortest proof will short-circuit the complicated universe and be spurious. --- People often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. An AI may know its source code perfectly, but it can’t perfectly know the hardware it is running on. Does adding a little uncertainty solve the problem? Often not: * The proof of the spurious counterfactual often still goes through; if you think you are in a five-and-ten problem with a 95% certainty, you can have the usual problem within that 95%. * Adding uncertainty to make counterfactuals well-defined doesn’t get you any guarantee that the counterfactuals will be *reasonable*. Hardware failures aren’t often what you want to expect when considering alternate actions. Consider this scenario: You are confident that you almost always take the left path. However, it is possible (though unlikely) for a [cosmic ray](https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/) to damage your circuits, in which case you could go right—but you would then be insane, which would have many other bad consequences. If *this reasoning in itself* is why you always go left, you’ve gone wrong. Simply ensuring that the agent has some uncertainty about its actions doesn’t ensure that the agent will have remotely reasonable counterfactual expectations. However, one thing we can try instead is to ensure the agent *actually takes each action* with some probability. This strategy is called **ε-exploration**. ε-exploration ensures that if an agent plays similar games on enough occasions, it can eventually learn realistic counterfactuals (modulo a concern of realizability which we will get to later). ε-exploration only works if it ensures that the agent itself can’t predict whether it is about to ε-explore. In fact, a good way to implement ε-exploration is via the rule “if the agent is too sure about its action, it takes a different one”. From a logical perspective, the unpredictability of ε-exploration is what prevents the problems we’ve been discussing. 
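As a minimal sketch of that rule (illustrative only; the names, the ε value, and the "too sure" threshold are assumptions of mine, not part of the original):

```python
import random

def act(planned_action, actions, confidence, epsilon=0.05, threshold=0.99):
    # Epsilon-exploration: if the agent is too sure about its action,
    # it sometimes takes a different one, so it can never fully
    # predict whether it is about to explore.
    if confidence > threshold and random.random() < epsilon:
        return random.choice([a for a in actions if a != planned_action])
    return planned_action

# An agent nearly certain it will take the $10 still explores occasionally:
print(act("take_10", ["take_5", "take_10"], confidence=0.999))
```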
From a learning-theoretic perspective, if the agent could know it wasn’t about to explore, then it could treat that as a different case—failing to generalize lessons from its exploration. This gets us back to a situation where we have no guarantee that the agent will learn better counterfactuals. Exploration may be the only source of data for some actions, so we need to force the agent to take that data into account, or it may not learn. However, even ε-exploration doesn’t seem to get things exactly right. Observing the result of ε-exploration shows you what happens if you take an action *unpredictably*; the consequences of taking that action as part of business-as-usual may be different. Suppose you’re an ε-explorer who lives in a world of ε-explorers. You’re applying for a job as a security guard, and you need to convince the interviewer that you’re not the kind of person who would run off with the stuff you’re guarding. They want to hire someone who has too much integrity to lie and steal, even if the person thought they could get away with it.   ![A seemingly trustworthy agent](https://intelligence.org/wp-content/uploads/2020/08/epsilon1.png)   Suppose the interviewer is an amazing judge of character—or just has read access to your source code.   ![A seemingly untrustworthy agent](https://intelligence.org/wp-content/uploads/2020/08/epsilon2.png)   In this situation, stealing might be a great option *as an ε-exploration action*, because the interviewer may not be able to predict your theft, or may not think punishment makes sense for a one-off anomaly.   ![A surprising epsilon-exploration action](https://intelligence.org/wp-content/uploads/2020/08/epsilon3.png)   But stealing is clearly a bad idea *as a normal action*, because you’ll be seen as much less reliable and trustworthy.   ![Taking away the wrong lesson from an epsilon-exploration lesson](https://intelligence.org/wp-content/uploads/2020/08/epsilon4.png)   If we don’t learn counterfactuals from ε-exploration, then, it seems we have no guarantee of learning realistic counterfactuals at all. But if we do learn from ε-exploration, it appears we still get things wrong in some cases. Switching to a probabilistic setting doesn’t cause the agent to reliably make “reasonable” choices, and neither does forced exploration. But writing down examples of “correct” counterfactual reasoning doesn’t seem hard from the outside! Maybe that’s because from “outside” we always have a dualistic perspective. We are in fact sitting outside of the problem, and we’ve defined it as a function of an agent.   ![Dualistic agents](https://intelligence.org/wp-content/uploads/2018/10/Alexei-DT.png) However, an agent can’t solve the problem in the same way from inside. From its perspective, its functional relationship with the environment isn’t an observable fact. This is why counterfactuals are called “counterfactuals”, after all.   ![Decision-making for embedded agents](https://intelligence.org/wp-content/uploads/2018/10/Emmy-DT.png)   When I told you about the 5 and 10 problem, I first told you about the problem, and then gave you an agent. When one agent doesn’t work well, we could consider a different agent. Finding a way to succeed at a decision problem involves finding an agent that when plugged into the problem takes the right action. The fact that we can even consider putting in different agents means that we have already carved the universe into an “agent” part, plus the rest of the universe with a hole for the agent—which is most of the work! 
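That carving can be made vivid in code (a toy sketch; the names are mine): from the outside, a decision problem is literally a function with an agent-shaped hole, and "solving" it is just trying different agents in the hole.

```python
# The "problem" is a function with an agent-shaped hole in it.
def universe(agent) -> int:
    return agent()  # this universe pays out whatever the agent takes

# From outside, we can simply plug in different agents and compare:
candidates = {"five_taker": lambda: 5, "ten_taker": lambda: 10}
best = max(candidates, key=lambda name: universe(candidates[name]))
print(best)  # ten_taker

# An embedded agent has no such vantage point: it sits inside
# `universe`, and its functional relationship to the rest of the
# world is not an observable fact for it.
```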
--- Are we just fooling ourselves due to the way we set up decision problems, then? Are there no “correct” counterfactuals? Well, maybe we *are* fooling ourselves. But there is still something we are confused about! “Counterfactuals are subjective, invented by the agent” doesn’t dissolve the mystery. There is *something* intelligent agents do, in the real world, to make decisions. So I’m not talking about agents who know their own actions because I think there’s going to be a big problem with intelligent machines inferring their own actions in the future. Rather, the possibility of knowing your own actions illustrates something confusing about determining the consequences of your actions—a confusion which shows up even in the very simple case where everything about the world is known and you just need to choose the larger pile of money. For all that, *humans* don’t seem to run into any trouble taking the $10. Can we take any inspiration from how humans make decisions? Well, suppose you’re actually asked to choose between $10 and $5. You know that you’ll take the $10. How do you reason about what *would* happen if you took the $5 instead? It seems easy if you can separate yourself from the world, so that you only think of external consequences (getting $5). ![Thinking about external consequences](https://intelligence.org/wp-content/uploads/2020/08/human-decisions1.png) If you think about *yourself* as well, the counterfactual starts seeming a bit more strange or contradictory. Maybe you have some absurd prediction about what the world would be like if you took the $5—like, “I’d have to be blind!” That’s alright, though. In the end you still see that taking the $5 would lead to bad consequences, and you still take the $10, so you’re doing fine.   ![Counterfactuals about the world and about oneself](https://intelligence.org/wp-content/uploads/2020/08/human-decisions2.png) The challenge for formal agents is that an agent can be in a similar position, except it is taking the $5, knows it is taking the $5, and can’t figure out that it should be taking the $10 instead, because of the absurd predictions it makes about what happens when it takes the $10. It seems hard for a human to end up in a situation like that; yet when we try to write down a formal reasoner, we keep running into this kind of problem. So it indeed seems like human decision-making is doing something here that we don’t yet understand. --- If you’re an embedded agent, then you should be able to think about yourself, just like you think about other objects in the environment. And other reasoners in your environment should be able to think about you too.   ![Emmy meets another agent](https://intelligence.org/wp-content/uploads/2020/08/Emmy-Multiagent-e1598198663468.png)   In the five-and-ten problem, we saw how messy things can get when an agent knows its own action before it acts. But this is hard to avoid for an embedded agent. It’s especially hard not to know your own action in standard Bayesian settings, which assume logical omniscience. A probability distribution assigns probability 1 to any fact which is logically true. So if a Bayesian agent knows its own source code, then it should know its own action. However, realistic agents who are not logically omniscient may run into the same problem. Logical omniscience forces the issue, but rejecting logical omniscience doesn’t eliminate the issue. 
ε-exploration *does* seem to solve that problem in many cases, by ensuring that agents have uncertainty about their choices and that the things they expect are based on experience.   ![Epsilon-exploration in the five-and-ten problem](https://intelligence.org/wp-content/uploads/2020/08/epsilon5.png)   However, as we saw in the security guard example, even ε-exploration seems to steer us wrong when the results of exploring randomly differ from the results of acting reliably. Examples which go wrong in this way seem to involve another part of the environment that behaves like you—such as another agent very similar to yourself, or a sufficiently good model or simulation of you. These are called **Newcomblike problems**; an example is the Twin Prisoner’s Dilemma mentioned above.   ![Newcomblike problems](https://intelligence.org/wp-content/uploads/2020/08/Newcomblike-Problems.png)   If the five-and-ten problem is about cutting a you-shaped piece out of the world so that the world can be treated as a function of your action, Newcomblike problems are about what to do when there are several approximately you-shaped pieces in the world. One idea is that *exact* copies should be treated as 100% under your “logical control”. For approximate models of you, or merely similar agents, control should drop off sharply as logical correlation decreases. But how does this work? ![Degrees of logical correlation](https://intelligence.org/wp-content/uploads/2020/08/Logical-Correlation.png) Newcomblike problems are difficult for almost the same reason as the self-reference issues discussed so far: prediction. With strategies such as ε-exploration, we tried to limit the self-knowledge of the *agent* in an attempt to avoid trouble. But the presence of powerful predictors in the environment reintroduces the trouble. By choosing what information to share, predictors can manipulate the agent and choose their actions for them. If there is something which can predict you, it might *tell* you its prediction, or related information, in which case it matters what you do *in response* to various things you could find out. Suppose you decide to do the opposite of whatever you’re told. Then it isn’t possible for the scenario to be set up in the first place. Either the predictor isn’t accurate after all, or alternatively, the predictor doesn’t share their prediction with you. On the other hand, suppose there’s some situation where you do act as predicted. Then the predictor can control how you’ll behave, by controlling what prediction they tell you. So, on the one hand, a powerful predictor can control you by selecting between the consistent possibilities. On the other hand, you are the one who chooses your pattern of responses in the first place. This means that you can set them up to your best advantage. --- So far, we’ve been discussing action counterfactuals—how to anticipate consequences of different actions. This discussion of controlling your responses introduces the **observation counterfactual**—imagining what the world would be like if different facts had been observed. Even if there is no one telling you a prediction about your future behavior, observation counterfactuals can still play a role in making the right decision. Consider the following game: Alice receives a card at random which is either High or Low. She may reveal the card if she wishes. Bob then gives his probability \(p\) that Alice has a high card. Alice always loses \(p^2\) dollars. 
Bob loses \(p^2\) if the card is low, and \((1-p)^2\) if the card is high. Bob has a proper scoring rule, so does best by giving his true belief. Alice just wants Bob’s belief to be as much toward “low” as possible. Suppose Alice will play only this one time. She sees a low card. Bob is good at reasoning about Alice, but is in the next room and so can’t read any tells. Should Alice reveal her card? Since Alice’s card is low, if she shows it to Bob, she will lose no money, which is the best possible outcome. However, this means that in the counterfactual world where Alice sees a high card, she wouldn’t be able to keep the secret—she might as well show her card in that case too, since her reluctance to show it would be as reliable a sign of “high”. On the other hand, if Alice doesn’t show her card, she loses 25¢—but then she can use the same strategy in the other world, rather than losing $1. So, before playing the game, Alice would want to visibly commit to not reveal; this makes expected loss 25¢, whereas the other strategy has expected loss 50¢. By taking observation counterfactuals into account, Alice is able to keep secrets—without them, Bob could perfectly infer her card from her actions. This game is equivalent to the decision problem called [counterfactual mugging](https://www.lesswrong.com/posts/mg6jDEuQEjBGtibX7/counterfactual-mugging). [**Updateless decision theory**](https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory) (UDT) is a proposed decision theory which can keep secrets in the high/low card game. UDT does this by recommending that the agent do whatever would have seemed wisest before—whatever your earlier self would have committed to do. As it happens, UDT also performs well in Newcomblike problems. Could something like UDT be related to what humans are doing, if only implicitly, to get good results on decision problems? Or, if it’s not, could it still be a good model for thinking about decision-making? Unfortunately, there are still some pretty deep difficulties here. UDT is an elegant solution to a fairly broad class of decision problems, but it only makes sense if the earlier self can foresee all possible situations. This works fine in a Bayesian setting where the prior already contains all possibilities within itself. However, there may be no way to do this in a realistic embedded setting. An agent has to be able to think of *new possibilities*—meaning that its earlier self doesn’t know enough to make all the decisions. And with that, we find ourselves squarely facing the problem of **embedded world-models**. --- This is part of Abram Demski and Scott Garrabrant’s [Embedded Agency](https://intelligence.org/embedded-agency/) sequence. [**Continued here!**](http://intelligence.org/embedded-models) The post [Decision Theory](https://intelligence.org/2018/10/31/embedded-decisions/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
78fdbf8d-7762-4a47-a5e0-20c17891f8b5
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berlin social Discussion article for the meetup : Berlin social WHEN: 23 March 2013 05:00:00PM (+0100) WHERE: S Wuhletal, Berlin We're meeting at my house again for a meetup on Saturday, March 23rd. Everyone is welcome. Please check the thread on our mailing list for details and the precise location. Primarily, this is a social event where we talk and play games. There's one extra activity on the agenda: * Everyone shares one goal. No matter whether it's something big or a short-term thing. State it, explain why, have a 2-minute pause to think, talk about how to achieve it as well as about alternatives for getting the same effect a different way. Food worked out really well last time, we'll try that again unchanged. That means I'll make some soup, we will probably order pizza and you are encouraged to bring something tasty. It's perfectly ok to bring just a little, last time there was a lot left over. See you there!
f8a19547-6b14-4ba7-9d06-d04ac11a13e7
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[Linkpost] New multi-modal Deepmind model fusing Chinchilla with images and videos Seems to be flying under the radar so far. Maybe because it looks more like incremental progress at first glance, similar to what, for example, [Aleph Alpha has done](https://medium.com/aleph-alpha-blog/the-end-of-the-era-imagenet-9f10b8f3d1b1) continuing the [Frozen](https://arxiv.org/pdf/2106.13884.pdf) approach.  However, with the (possibly cherry-picked) examples, it looks to me a lot like the image/video/text-GPT-4 many are expecting.  [Blogpost here](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model). [Paper here](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/tackling-multiple-tasks-with-a-single-visual-language-model/flamingo.pdf). ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/9281031d9062495e90ded4fee6dbba482746a8620cdca994.png)
bb9d851a-f4c3-4992-9f64-45cdacab6047
trentmkelly/LessWrong-43k
LessWrong
Victoria BC meet-up Monday May 23rd 5pm This little town doesn't seem to have much in the way of a lesswronger presence (search turns up me and one other user who hasn't been active since 2009), but damnit I'm here right now and I may as well give it a try! Therefore I'll be at the Starbucks near the Market on Yates on Monday May 23rd from 5 pm to at least 6 pm. I'll be reading a copy of "Theory of Instruction: Principles and Applications". Or writing on my laptop I guess. Actually, let's make this easy: Whatever I'm doing, I'll be wearing a black tricorn hat with gold piping and a giant white plume. I'll be there anyway, but if any Victorianites out there are reading this, please, please do contact me, especially if you want to come but need a different time and/or location. All right, here's hoping to see you there, all my hypothetical Victoria lesswrong homies!
29ccf0bc-b7b5-48e6-8c77-005a3617c8a9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Podcast with Divia Eden and Ronny Fernandez on the strong orthogonality thesis On my side podcast, ["The Filan Cabinet"](https://thefilancabinet.com/), I invited Ronny Fernandez and Divia Eden to talk about the strong orthogonality thesis, and whether it's true. Seems like people here might also be interested. Podcast description below, and you can listen [here](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5saWJzeW4uY29tLzQzODA4MS9yc3M/episode/YzFlYTRlMjItMTMyOS00ZTU5LWI4YTEtYmQ1N2FlYmEyYzg3). --- In this episode, Divia Eden and Ronny Fernandez talk about the (strong) orthogonality thesis - that arbitrarily smart intelligences can be paired with arbitrary goals, without additional complication beyond that of specifying the goal - with light prompting from me. Topics they touch on include: * Why aren't bees brilliant scientists? * Can you efficiently make an AGI out of one part that predicts the future conditioned on some plans, and another that evaluates whether plans are good? * If minds are made of smaller sub-agents with more primitive beliefs and desires, does that shape their terminal goals? + Also, how would that even work? * Which is cooler: rockets, or butterflies? * What processes would make AIs terminally value integrity? * Why do beavers build dams? * Would these questions be easier to answer if we made octopuses really smart?
ed491c7a-c46d-4ee0-b4bf-83cfd76a64f5
trentmkelly/LessWrong-43k
LessWrong
Voluntourism Someone recently asked me what I thought of the idea of using extra vacation days for volunteering, from an effective altruism perspective. Here's what I wrote to them:

In the effective altruism community people tend to be relatively down on this approach. It's relatively common for, say, a church group to raise funds to fly a bunch of people to a poor country for a week or two where they do unskilled manual labor (painting orphanages, picking up trash, building housing). A typical effective altruist response would be something like:

> Poor countries are not lacking in workers!
> The money spent on flying you there would easily have funded several times as much work as you were able to do, and from much more experienced people!

Usually it makes more sense to use your job to make things better (through donations or directly valuable work) and use your vacation to make yourself happy. The idea is, if you're just optimizing for helping others this doesn't make sense. There are two cases where I see this as more compelling, though:

* When you're using skills that are undersupplied where you're going. Things like doctors traveling to work at clinics, structural engineers reviewing building plans, or programmers teaching people to code. The main downside is that this can be similar to regular work, and so not be much of a vacation! And most people's skills aren't a good fit for this.
* When you mostly want to take a vacation but would enjoy helping some along the way. If you compare voluntourism to regular tourism it looks decent!

So, why is this common? Some guesses:

* Traveling together and working hard together is really good for building community. So church groups that organize this kind of trip are likely to be stronger, more stable, and expand more.
* Not only do these build community, they build commitment and motivation. People who've been on these trips are more likely to want to help others go on them, and, I suspect, are more likely to volunteer.
8bc3322b-d2d7-4559-8817-28edcf5c240b
trentmkelly/LessWrong-43k
LessWrong
Mental Illness Is Not Evidence Against Abuse Allegations ETA: this post was pretty much refuted by comments below. I've noticed a situation several times that I think deserves attention. Somebody goes around saying they've been the victim of mistreatment. But they seem mentally ill. Whether or not you know of a diagnosis, they seem "off" somehow -- highly agitated, making social faux pas, telling stories that don't quite add up. So people are very suspicious about whether their allegations are true. Is this rational? In general, someone who seems less trustworthy should be believed less. And, yes, mentally ill people are more likely to be delusional or exaggerating. But they are also more likely to actually be victims of crimes than the general population. 40% of women in the UK with severe mental illness are victims of rape or attempted rape. People with severe mental illness are 6x as likely as the general population to have recently experienced sexual violence. 30% of mentally ill adults in an American study had been victims of violent crime in the previous six months. Mentally ill adults in Sweden are 5x more likely than the general population to be murdered. More than 25% of severely mentally ill Americans have been the victims of a violent crime in the last year, 4x the rate of the general population. 30-33% of psychiatric patients have been victims of domestic violence. Someone being mentally ill is evidence for, not against, their being victims of a crime. And the base rates of violent crime are pretty high, so all things being equal, "someone attacked me" is not an extraordinary claim. Even when someone seems crazy and has made a lot of claims you don't believe, it can be reasonable to believe their claims of crime victimization. Don't fall into the horns effect.
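To put a number on the update, here is a toy likelihood-ratio sketch using the post's own figures (the implied base rate is back-calculated from the stated "4x"; everything else is deliberately oversimplified, e.g. it ignores differences in report reliability):

```python
# From the post: >25% of severely mentally ill Americans were victims of
# violent crime in the last year, about 4x the general-population rate.
p_victim_given_ill = 0.25
p_victim_given_not_ill = 0.25 / 4   # implied base rate, ~6%

likelihood_ratio = p_victim_given_ill / p_victim_given_not_ill
print(likelihood_ratio)  # 4.0 -- on these figures, apparent mental illness
                         # should raise, not lower, P(a crime occurred)
```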
31c49fa9-c210-4854-84b6-1c2f74b8188a
trentmkelly/LessWrong-43k
LessWrong
If you are a Machine. A shocking finding. Let's assume we are all deterministic machines from evolution. Many of you already believe in Evolution. And many of you are already starting to see, from the AI Machine Learning field, just how much of a deterministic Machine we really are as we open up the human brain, and how much free will we really lack. In Evolution, the first cell came together and started duplicating. Eventually, there were multi-celled organisms duplicating everywhere. Life was born. And today, there are various man-made laws that state that taking a life is illegal, and harming a life is illegal. Life is about living, and enjoying... with others. From Evolution, humans are designed to run from pain, and therefore death, and run to food and mates, therefore persisting life. Life/Evolution persists because cells/humans duplicate more than are lost, so the population count is only increasing. And we have more duplication to come still - we will spread to space eventually. Also, humans as individuals seek to stay alive; a rising trend is life extension, and humans are living longer. Evolution spreads everywhere, faster as time goes on; it takes over, and persists. The population count of cells only rises. It's a trend. Because the 'evolutionary system' doesn't let itself be killed off. It is literally a deterministic system that avoids death. Evolution/cells are designed, from mutations and learnings, to avoid death the best they can and spread as much as they can. Empirically, if we really are deterministic Machines, then, shockingly, no arrangement of particles is any more special than any other arrangement of particles; rocks, chairs, and humans are no different. But humans do have one difference: humans do everything they can to avoid Death in Evolution. They will say the BEST things they can to avoid death and pain, such as that they are alive/ aware/ feel/ special/ magical/ talk to God. We really have no reason to believe we have a licence to live or not be harmed any more than a rock
fdc925ba-918e-4c60-8b18-463feea14ae8
StampyAI/alignment-research-dataset/arxiv
Arxiv
TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI

## 1 Introduction

A few weeks ago, a public statement was signed by leading scientists and executives in AI, stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (CAIS, 2023). This represents a significant increase in coordinated concern for human extinction risk arising from AI technology, and implies more generally that catastrophic societal-scale risks from AI should be taken as a serious concern. In consonance, just a few days ago US President Joe Biden and UK Prime Minister Rishi Sunak expressed an agreement to “work together on AI safety, including multilaterally”, citing that “last week, the pioneers of artificial intelligence warned us about the scale of the challenge” (Biden & Sunak, 2023). Meanwhile, in recent years national governments throughout the world have begun to address societal-scale risks from AI. In 2018, Chinese leader Xi Jinping exhorted the attendees of the World AI Conference to “make sure that artificial intelligence is safe, reliable and controllable”. Since then, several AI governance initiatives have emerged in China (Sheehan, 2021), including specific measures for generative AI services drafted in April of this year (China, 2023; Huang, 2023). In Europe, the proposed European Union AI Act began in large part as a response to concerns that AI systems may pose risks to the safety and fundamental rights of humans (EU, 2021). In the US, last year the White House issued a Blueprint for an AI Bill of Rights (White House, 2022), addressing “challenges posed to democracy today” by “the use of technology, data, and automated systems in ways that threaten the rights of the American public.”

Harms occurring at the scale of individual persons may be distinguished from harms occurring on the scale of an entire society, which we call *societal-scale harms*. This distinction can also be seen somewhat in last year’s report from the US National Institute of Standards and Technology proposing an “AI risk management framework” (NIST, 2022), which distinguished individual harms from “societal harm” and “harms to a system […], for example, large scale harms to the financial system or global supply chain”; see Figure 1. Harms to individuals and groups should also be considered “societal-scale” when sufficiently widespread.

![Purple and orange annotations on Figure 2 of the NIST “AI Risk Management Framework: Initial Draft”, indicating what we consider to be “societal-scale risks”.](https://media.arxiv-vanity.com/render-output/7819623/x1.png)

Figure 1: Purple and orange annotations on Figure 2 of the NIST “AI Risk Management Framework: Initial Draft”, indicating what we consider to be “societal-scale risks”.

How should societal-scale risks be addressed in technical terms? So far, most research papers addressing societal-scale and existential risks have focused on misalignment of a single advanced AI system. In a recent blog post, Bengio (2023) lays out a clear and concise logical argument for this case, entitled “How Rogue AIs may Arise”. However, while misalignment of individual systems remains a problem, it is not the only source of societal-scale risks from AI, and extinction risk is no exception.
Problems of racism, misinformation, election interference, and other forms of injustice are all risk factors affecting humanity’s ability to function and survive as a healthy civilization, and can all arise from interactions between multiple systems or misuse of otherwise “aligned” systems. And, while Russell (2019) has offered the single-human/single-machine framing as a “model for the relationship between the human race and its machines, each construed monolithically,” this monolithic view of AI technology is not enough: safety requires analysis of risks at many scales of organization simultaneously. Meanwhile, Ng and others (2023) have together called for a better articulation of concrete risks from AI, including extinction risk. In this paper, we expand our focus somewhat from the implicit assumption that societal-scale harms must result from a single misaligned system, and begin to analyze societal-scale risks in accordance with the decision tree in Figure 2 below:

![An exhaustive decision tree for classifying societal-scale harms from AI technology](https://media.arxiv-vanity.com/render-output/7819623/x2.png)

Figure 2: An exhaustive decision tree for classifying societal-scale harms from AI technology

Safety engineers often carry out a *fault tree analysis* (Watson, 1961; Mearns, 1965; Lee, 1985) as a way to ensure they have covered all possible failures. The root of a fault tree is the condition to be avoided and each branch tests some condition. As long as the branches from each node are logically exhaustive, the leaves necessarily cover all possible circumstances. Typically branches test whether a given subsystem is working correctly or not, but can also test more general conditions such as the ambient temperature or whether the system is undergoing testing. The decision tree in Figure 2 above follows the same basic principle to produce an exhaustive taxonomy.

Exhaustiveness of a taxonomy is of course no guarantee of usefulness. For example, an analysis based on whether the day of the month is a prime number would yield an exhaustive two-leaf taxonomy while providing zero analytical benefit. A taxonomy is only useful to the extent that it reveals new risks or recommends helpful interventions. To that end, we have chosen an exhaustive taxonomy based on accountability: whose actions led to the risk, were they unified, and were they deliberate? Such a taxonomy may be helpful because it is closely tied to the important questions of where to look for emerging risks and what kinds of policy interventions might be effective. This taxonomy in particular surfaces risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are needed. Many other taxonomies are possible and should be explored. A previous taxonomy of Yampolskiy (2015) also examined sources of AI risk arising intentionally, by mistake, or from a system’s environment, either pre-deployment or post-deployment. While useful, Yampolskiy’s taxonomy was non-exhaustive, because it presumed a unified intention amongst the creators of a particular AI system. In reality, no well-defined “creator’s intent” might exist if multiple AI systems are involved and built with different objectives in mind.
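To make the branching logic concrete, here is a toy sketch (my own rendering, not the paper's code) of the accountability questions just listed. The leaf labels other than Type 1 are placeholders, since the full tree is given in Figure 2 and the remaining types are defined later:

```python
def classify_harm(responsible_party, unified, deliberate):
    """Toy sketch of the accountability-based decision tree.

    The three questions come from the text above; leaf labels other
    than Type 1 are placeholders for the types defined later.
    """
    if responsible_party is None:      # whose actions led to the risk?
        return "Type 1: diffusion of responsibility"
    if not unified:                    # were those actions unified?
        return "one of Types 2-6 (non-unified creators)"
    if deliberate:                     # were they deliberate?
        return "one of Types 2-6 (deliberate)"
    return "one of Types 2-6 (not deliberate)"

# Every input falls through some branch, so the leaves are exhaustive:
print(classify_harm(None, unified=False, deliberate=False))
```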
### 1.1 Related work and historical context

Historically, the risk posed to humanity by advanced AI systems was first recognized in fiction, by authors such as Samuel Butler (butler1863darwin) and Karel Čapek (capek1921rur). Later, warnings were also expressed by computer scientists such as Alan Turing (turing1951can; turing1951intelligent) and Norbert Wiener (wiener1960some), with Wiener pinning risk on the difficulty of ensuring “the purpose put into the machine” would be aligned with actual human preferences. I. J. Good (good1966speculations) highlighted the additional threat of rapid, recursive self-improvement leading to a loss of control. In this century, many have examined existential risk from superintelligent machines (hibbard2001super; yudkowsky2008artificial; barrat2013artificial; bostrom2014superintelligence; yampolskiy2015taxonomy) and various technical approaches have been suggested to address it, particularly in the area of AI alignment (soares2014aligning; russell2014white; hadfield2016cooperative; amodei2016concrete; russell2019human).

2 Types of Risk
---------------

Here we begin our analysis of risks organized into six risk types, which constitute an exhaustive decision tree for classifying societal harms from AI or algorithms more broadly. Types 2-6 will classify risks with reference to the intentions of the AI technology’s creators, and whether those intentions are being well served by the technology. Type 1, by contrast, is *premised* on no single institution being primarily responsible for creating the problematic technology. Thus, Type 1 serves as a hedge against the taxonomy of Types 2-6 being non-exhaustive.

### 2.1 Type 1: Diffusion of responsibility

Automated processes can cause societal harm even when no one in particular is primarily responsible for the creation or deployment of those processes (zwetsloot2019thinking), and perhaps even as a result of the absence of responsibility. The infamous “flash crash” of 2010 is an instance of this: numerous stock trading algorithms from a variety of companies interacted in a fashion that rapidly devalued the US stock market by over 1 trillion dollars in a matter of minutes. Fortunately, humans were able to intervene afterward and reverse the damage, but that might not always be possible as AI technology becomes more powerful and pervasive. Consider the following fictional story, where the impact of unemployment on crime rates (raphael2001identifying) is exacerbated by a cycle of algorithmic predictions:

Story 1a: Self-Fulfilling Pessimism. Scientists develop an algorithm for predicting the answers to questions about a person, as a function of freely available and purchasable information about the person (social media, resumes, browsing history, purchasing history, etc.). The algorithm is made freely available to the public, and employers begin using the algorithm to screen out potential hires by asking, “Is this person likely to be arrested in the next year?” Courts and regulatory bodies attempt to ban the technology by invoking privacy norms, but struggle to establish cases against the use of publicly available information, so the technology broadly remains in use. Innocent people who share certain characteristics with past convicted criminals end up struggling to get jobs, become disproportionately unemployed, and correspondingly more often commit theft to fulfill basic needs.
Meanwhile, police also use the algorithm to prioritize their investigations, and since unemployment is a predictor of property crime, the algorithm leads them to suspect and arrest more unemployed people. Some of the arrests are talked about on social media, so the algorithm learns that the arrested individuals are likely to be arrested again, making it even more difficult for them to get jobs. A cycle of deeply unfair socioeconomic discrimination begins.

In the story above, a subset of humanity becomes unfairly disempowered, both economically and legally. It is possible, we claim, for all of humanity to become similarly disempowered. How? Consider that many systems of production and consumption on Earth currently operate entirely without human involvement, while producing side effects for humans and other life. For instance, algal blooms consume energy from the sun and materials from the surrounding ocean, and as a side effect they sometimes produce toxins that are harmful to other sea life as well as human swimmers. It is important to consider the possibility that artificially intelligent systems, in the future, could also sustain fully self-contained loops of production and consumption that would yield negative side effects for humanity. The following diagram illustrates how a few industries, if fully automated through AI technology, could operate in a closed loop of production (and consumption) without any other inputs:

![Figure 3](https://media.arxiv-vanity.com/render-output/7819623/x3.png)

Figure 3: A hypothetical self-contained “production web” of companies operating with no human involvement; such a production web would make it possible to completely decouple economic activities from serving human values.

Could such a self-contained “production web” ever pose a threat to humans? One might argue that, because AI technology will be created by humanity, it will always serve our best interests. However, consider how many human colonies have started out dependent upon a home nation, and eventually gained sufficient independence from the home nation to revolt against it. Could humanity create an “AI industry” that becomes sufficiently independent of us to pose a global threat?

It might seem strange to consider something as abstract or diffuse as an *industry* posing a threat to the world. However, consider how the fossil fuel industry was built by humans, yet is presently very difficult to shut down or even regulate, due to patterns of regulatory interference exhibited by oil companies in many jurisdictions (carpenter2013preventing; dal2006regulatory). The same could be said for the tobacco industry for many years (gilmore2019tobacco). The “AI industry”, if unchecked, could behave similarly, but potentially much more quickly than the oil industry, in cases where AI is able to think and act much more quickly than humans. Finally, consider how species of ants that feed on acacia trees eventually lose the ability to digest other foods, ending up “enslaved” to protecting the health of the acacia trees as their only food source (yong2013trees). If humanity comes to depend critically on AI technology to survive, it may not be so easy to do away with even if it begins to harm us, individually or collectively.
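The notion of a closed production loop can be made precise with a simple check: a set of fully automated industries is self-contained when every input any member consumes is produced by some member of the set. The industry names and input/output lists below are illustrative placeholders, not data from Figure 3:

```
# Toy check for a self-contained "production web": a set of industries is
# closed if every input any member consumes is produced within the set.
# Industry names and input/output sets are illustrative placeholders.

SUPPLY_CHAIN = {
    # industry: (inputs consumed, outputs produced)
    "mining":        ({"electricity", "machinery"}, {"raw_materials"}),
    "utilities":     ({"machinery"}, {"electricity"}),
    "manufacturing": ({"raw_materials", "electricity"}, {"machinery"}),
    "freight":       ({"machinery", "electricity"}, {"logistics"}),
}

def is_self_contained(industries):
    produced = set().union(*(SUPPLY_CHAIN[i][1] for i in industries))
    consumed = set().union(*(SUPPLY_CHAIN[i][0] for i in industries))
    return consumed <= produced  # no input must come from outside the set

print(is_self_contained({"mining", "utilities", "manufacturing"}))  # True
print(is_self_contained({"mining", "freight"}))                     # False
```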
For an illustration of how this might happen, consider the story below:

Story 1b: The Production Web. Someday, AI researchers develop and publish an exciting new algorithm for combining natural language processing and planning capabilities. Various competing tech companies develop “management assistant” software tools based on the algorithm, which can analyze a company’s cash flows, workflows, and communications to recommend more profitable business decisions that also yield positive PR and networking opportunities for managers. It turns out that managers are able to automate their own jobs almost entirely, by having the software manage their staff directly. Software tools based on variants of the algorithm sweep through companies in nearly every industry, automating and replacing jobs at various levels of management, sometimes even CEOs. One company develops an “engineer-assistant” version of the assistant software, capable of software engineering tasks, including upgrades to the management assistant software. Within a few years, it becomes technologically feasible for almost any human job to be performed by a combination of software and robotic workers that can operate more quickly and cheaply than humans, and the global job market gradually begins to avail itself of this possibility. A huge increase in global economic productivity ensues. Despite the massive turnover in the job market, average quality of life also improves in almost every country, as products and services become cheaper to produce and provide. Most job losses come with generous severance packages, sometimes enough for a full retirement. Companies closer to becoming fully automated achieve faster turnaround times, greater deal bandwidth, and more creative business-to-business negotiations. Some companies idealistically cling to the idea that human workers must remain integral to their operations; however, they quickly fall behind because they simply can’t provide products and services as cheaply as their fully automated competitors. Eventually, almost all companies either fail and shut down or become fully automated.

An interesting pattern of trade begins to emerge between a conglomerate of automated companies in the materials, real estate, construction, utilities, and freight and logistics industries, along with a new generation of “precision manufacturing” companies that can use robots to build almost anything if given the right materials, a place to build, some 3D printers to get started with, and electricity. Together, these companies sustain an increasingly self-contained and interconnected “production web” that can operate with no input from companies outside the web, while providing an impressive swath of cheap products and services to the rest of the world. The objective of each company in the production web could loosely be described as an amorphous combination of profitability, size, market share, and social status, all learned by emulating the decision-making of human business leaders. These objectives are implemented as large and opaque networks of parameters that were tuned and trained to optimize the inferred objectives of human business leaders during the early days of the management assistant software boom. At this point, the story hits an inflection point that is difficult for the characters in it to perceive.
In short, the world begins to change in a way that renders the production web a harmful rather than helpful presence, but the change happens so gradually that collective action against it is difficult to precipitate, and eventually the change is irreversible. The details of this change — or rather, just one way it could play out — constitute the remainder of the story…

First, human leaders in more conservative jurisdictions struggle to keep track of how the production web companies are producing so many products so cheaply, and without easily accessible human-legible paper trails, auditing attempts glean little insight. As time progresses, it becomes increasingly unclear—even to the concerned and overwhelmed Board members of the fully mechanized companies of the production web—whether these companies are serving or merely appeasing humanity. We eventually realize with collective certainty that the companies have been trading and optimizing according to objectives misaligned with preserving our long-term well-being and existence, but by then their facilities are so pervasive, secure, and necessary for serving our basic needs that we are unable to stop them from operating. With no further need for the companies to appease humans in pursuing their production objectives, less and less of their activities end up benefiting humanity. Eventually, human-critical resources (e.g., arable land, drinking water, atmospheric oxygen) are depleted and climate conditions are compromised at an alarming rate, threatening humanity’s very existence. In the end, humanity is faced with a difficult collective action problem: deciding when and in what way to physically stop the production web from operating. In the best case, a shutdown is orchestrated, leading to decades of economic dislocation, deprivation, and possibly famine. In the worst case, military-level conflict emerges between humanity and the fully automated companies, or humanity simply perishes before mounting a coordinated defense.

#### Analysis.

In the story above, rapidly operating institutions tended toward trade and interaction with other rapidly operating institutions, thus yielding a collective tendency towards “closing the loop” on production and consumption by fully automated companies. Abstractly, it may be summarized as follows:

1. AI technology proliferated during a period when it was beneficial and helpful to its users.
2. There was a gradual handing-over of control from humans to AI systems, driven by competitive pressures for institutions to (a) operate more quickly through internal automation, and (b) complete trades and other deals more quickly by preferentially engaging with other fully automated companies.
3. Humans were not able to collectively agree upon when and how much to slow down or shut down the pattern of technological advancement.
4. Once a closed-loop “production web” had formed from the competitive pressures in 2(a) and 2(b), the companies in the production web had no production- or consumption-driven incentive to protect human well-being, and eventually became harmful.

#### What can be done?

What kinds of checks and balances are needed to keep stories like these entirely in the domain of science fiction? For one thing, when the activities of multiple agents are collectively giving rise to risks or harms, new behavior norms for the agents are needed to steer them collectively away from the harmful pattern. Indeed, both stories above included shortfalls in regulatory efforts.
To prevent such scenarios, effective regulatory foresight and coordination is key. Agriculture provides an interesting precedent for regulation. Historically, agricultural products—like algorithms—have had the potential to be replicated and misused, leading to societal harm, including harms not easily perceptible by any one individual. For instance, we now know that small amounts of lead in food can yield a slow accumulation of mental health problems, even when the amount of lead in any particular meal is imperceptible to an individual consumer. Widespread degradation of mental health can lead to many other large-scale harms, including the breakdown of institutions that depend on mentally healthy constituents to function. Today, this problem is avoided through regulation. The United States Food and Drug Administration (FDA) relies on the rigorous classification and testing of food and drugs to protect the public from health risks, as well as on stringent requirements for grocery stores and pharmacies to maintain legible records.

By contrast, there is currently no such pervasive and influential regulatory body for algorithms in the United States. At present, tech companies follow their own internal policies for protecting users, with little external oversight of interactive algorithms and their effects on people. However, there has been some discussion in the Senate regarding the creation of a federal agency for this purpose (ussenate2023hearing; ussenate2023oversight). Also, the National Institute of Standards and Technology (NIST) is presently compiling non-regulatory standards for AI technology, but these are suggestions rather than legally enforced requirements. The General Data Protection Regulation (GDPR) is enforceable in the EU, and California’s Consumer Privacy Act (CCPA) applies to protect consumers in California; however, unless such regulations are adopted more widely, they may simply serve to determine *where* harmful algorithms operate, rather than *whether*.

How would we even begin to classify and test *algorithms* for regulatory oversight, the way foods are classified and tested? Many approaches could make sense here. One that stands out is a language called UML (Unified Modeling Language) specifically designed for documenting and diagramming the architecture of IT systems at an abstract level, including workflow interactions with humans. Perhaps a UML-like language could be used to establish standards for classifying and regulating algorithms. Thinking purely quantitatively, oversight could also be triggered entirely on the basis of large computational resource expenditures. Companies could be required to produce auditable reports of how they use computing and communications resources, just as they are already required to report on their usage of money or controlled substances (jackson2022compute). In testimony to the U.S. Senate Committee on Armed Services in April of this year, RAND CEO Jason Matheny advocated for “Defense Production Act authorities to require companies to report the development or distribution of large AI computing clusters, training runs, and trained models (e.g. >1,000 AI chips, >10^27 bit operations, and >100 billion parameters, respectively)” (matheny2023testimony). Auditing is discussed more under risk Type 4 below, specifically with regards to the interpretability of AI algorithms.
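As a toy illustration of how such quantitative triggers might work in practice, the check below flags a training run for reporting using the three thresholds quoted from Matheny's testimony; the field names and the report format are invented for the example:

```
# Hypothetical sketch of a compute-reporting trigger based on the thresholds
# quoted above (>1,000 AI chips, >10^27 bit operations, >100B parameters).
# Field names and the dict-based "run record" format are assumptions.

REPORTING_THRESHOLDS = {
    "ai_chips": 1_000,
    "bit_operations": 10**27,
    "parameters": 100_000_000_000,
}

def requires_report(run):
    """Return True if any quantity in the run exceeds its threshold."""
    return any(run.get(key, 0) > limit
               for key, limit in REPORTING_THRESHOLDS.items())

training_run = {"ai_chips": 4_096, "bit_operations": 3e26, "parameters": 7e10}
print(requires_report(training_run))  # True: the chip count exceeds 1,000
```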
More than individual company audits will be necessary to prevent large-scale interactions between diffuse collections of companies from leading to negative externalities for society. Humanity, collectively, will eventually need to enable one or more regulatory institutions to view the interaction of computing and communications systems at a global scale, to detect if and when those global interactions are beginning to lead the world down a harmful path that no individual company might be responsible for preventing (like in the stories), or even capable of noticing. Who or what oversight bodies should be privy to such a “global report of computing and communications activity”? It might be the entire public, one or more government agencies, an international NGO, or a professional standards organization. In all cases, help will be needed from domain experts to assess potential risks that could arise from the worldwide aggregate behavior of algorithms. A new discipline that essentially unifies control theory, operations research, economics, law, and political theory will likely be needed to make value judgements at a global scale, irrespective of whether those judgements are made by a centralized or distributed agency.

Where could we begin to develop such a unified discipline? At a technical level, one might start by developing simplified mathematical models of the *sociotechnical context* in which AI algorithms operate, perhaps leaning on UML or another systems-level modelling language for inspiration. A few micro-scale examples of this are given under Type 4; as a macro-scale example, perhaps a UML-like model of the global economy in the story above might yield Figure [3](#S2.F3 "Figure 3 ‣ 2.1 Type 1: Diffusion of responsibility ‣ 2 Types of Risk ‣ TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI") as a sub-diagram of a larger production diagram.

In summary, the following three problems need to be addressed:

* Regulatory problem: Algorithms and their interaction with humans will eventually need to be regulated in the same way that food and drugs are currently regulated.
* Oversight problem: One or more institutions will be needed to oversee the worldwide behavior and impact of non-human algorithms, hereby dubbed “the algorithmic economy.” This should include assessments of whether humanity retains the ability to shut down or redirect the algorithmic economy, with an eye toward the risk of self-contained production webs developing over time.
* Technical problem: A new technical discipline will be needed to classify and analyze the sociotechnical context of algorithms for the purposes of oversight and regulation.

### 2.2 Type 2: “Bigger than expected” AI impacts

The Self-Fulfilling Pessimism and Production Web stories above already illustrate how the scope of actions available to an AI technology can be greatly expanded when the technology is copied many times over, or modified relative to the likely intentions of its initial creators. However, impact on an unexpectedly large scale can occur even if only one team is responsible for creating the technology. The following story illustrates how a new AI technology can yield a negative societal-scale impact as a result of its developers failing to adequately understand the mechanism by which its societal-scale impact would occur:

Story 2a: Hate Speech Leak. A social media company decides to develop a content moderation tool for flagging instances of hate speech.
For testing purposes, AI researchers train a natural language text generator to produce a large volume of artificial hate speech, which turns out to be quite creative in the hateful arguments it generates, and helps the company to develop very robust hate-speech detection and flagging algorithms. But one day the hate speech corpus is accidentally leaked onto the Internet, yielding a highly negative global impact where persons looking to incite hatred begin re-using its statements as “scientifically proven insults.”[1]

[1] The Microsoft Tay fiasco shows how much trouble AI-generated hate speech can cause on the internet. However, Tay was actually intended to interact with all of society and was released intentionally; thus, Tay itself might be better viewed as an instance of Type 3 problems or perhaps Type 5. Safety issues arising more recently with Microsoft Bing Chat may be viewed similarly.

Obviously, researchers will exercise some level of caution to prevent AI systems and their products from “getting out” unexpectedly; otherwise Chernobyl-like disasters can result from failures to contain extremely impactful systems and data. But there are more subtle ways in which an AI technology could end up with a larger scale of impact than its creators anticipated. For instance:

Story 2b: The Bad Advice Bot. A chat-bot is created to help users talk about stressors in their personal lives. A 6-month beta test shows that users claim a large benefit from talking to the bot, and almost never regret using it, so an open source version of the bot is made available online, which can be downloaded and used for free even without an internet connection. The software “goes viral”, attracting many more users than expected, until over 50% of young adults aged 20 to 30 become regular users of the bot’s advice. When the bot gives the same advice to multiple members of the same friend group, they end up taking it much more seriously than in the beta tests (which didn’t recruit whole groups of friends). As a result of the bot’s frequent advice to “get some distance from their stressors”, many people begin to consider dropping out of college or quitting their jobs in order to do so. Ordinarily this would be a passing thought, but finding that many of their friends were contemplating the same decisions (due to the influence of the bot), they feel more socially comfortable making the change. Many groups of friends collectively decide to leave their jobs or schools. Public education suffers, and unemployment rates increase.

#### Analysis.

In each of the above stories, a technology turns out to have a much larger impact on society than expected, and that impact turns out to be bad. In the first story, the release of the technology is an accident, whereas in the second story the release is intentional but the manner and scope of adoption were unexpected. Professional standards and ethics have a major role to play in encouraging AI developers to predict and avoid outcomes like this. There is also technical work to be done: AI systems should be developed with some ability to predict whether actions will be “high impact” or “low impact” (armstrong2017low), and to avoid having a greater impact than intended, especially on variables outside the domain of the system’s training and expertise.
A variety of impact control concepts have been considered using various definitions of impact (taylor2016alignment; amodei2016concrete; krakovna2018measuring; huang2019learning; shah2019preferences; turner2020conservative). These approaches are preliminary and have not been tried much in real-world settings, and in particular have not been applied to natural language systems.

#### Impact restrictions.

We may wish to treat one or more protected features of society as *outside the domain* of an AI system’s allowable influence. For instance, in the Bad Advice Bot story above, the large increase in unemployment rates was very different from, and more significant than, the kind of impact its creators expected. One way to restrict the impact of an AI system might be to have the system predict and avoid significant impacts outside of its allowed domain. An AI system could predict and control its own impact on the world in either a “model-based” or “model-free” fashion. Prediction and control of a quantity is said to be model-based if it is based on a representation of the world, internal to the system. Thus, *model-based impact control* could use one of the above definitions of impact directly to predict the “impact level” of various actions before making a choice. By contrast, *model-free* control of a quantity is learned from past experiences of what affected the quantity, often in settings where the AI system’s designers do not know how to model the system’s environment. *Model-free impact control* could be implemented if an impact metric is calculated by a trusted source external to the AI system, and provided as a signal for the system to observe, predict, and control. Such solutions might resemble social relationships where people or institutions define boundaries for other agents to respect, without having to explain reasons to those agents (e.g., “mind your own business”, “get out of my backyard”). Human professionals with a heightened capability to influence others—such as doctors, lawyers, and therapists—typically undergo significant training and enculturation to learn what is or is not appropriate for them to influence, and this understanding often depends on at least an amateur knowledge of how the world works outside their field. As such, it may be a very challenging learning problem for advanced AI systems to reliably limit their own impact.

#### Scope sensitivity.

Ideal behavior for an AI system is a function of how many copies of the system have been implemented, and where. For instance, if a bot convinces one person to go to Central Park for a lunch break, a relaxing walk results; but if a million copies of the bot convince a million people to go there all at once, the result is a terribly crowded park. So, a new AI technology needs to be designed to act differently—and typically more conservatively—based on the number of instances of the technology that are running, and their context. In other words, new AI technologies need to be sensitive to the scale on which they are being applied. This at least requires each implementation to know roughly how many other implementations are out there, which requires at least a minimal degree of communication between implementations, perhaps mediated by human overseers responsible for limiting the scope of a beta test. Without scope sensitivity, the impact of a new AI technology could be much larger than expected, simply as a result of an unexpected degree of popularity.
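As a minimal sketch of model-based impact control, the selection rule below trades task reward against a penalty on predicted impact to protected, out-of-domain variables. The reward function, impact model, and penalty weight are invented stand-ins, not any of the cited impact definitions:

```
# Minimal sketch of model-based impact control: choose the action that
# maximizes task reward minus a penalty on predicted impact to variables
# outside the system's allowed domain. The models and weight LAMBDA are
# illustrative stand-ins, not any of the cited impact definitions.

LAMBDA = 10.0  # how strongly out-of-domain impact is penalized

def choose_action(actions, task_reward, predicted_impact):
    """task_reward(a): in-domain benefit of action a; predicted_impact(a):
    estimated change to protected variables (e.g., unemployment rate)."""
    return max(actions,
               key=lambda a: task_reward(a) - LAMBDA * predicted_impact(a))

# Toy usage: an action with high reward but large side effects loses to a
# more conservative action once predicted impact is penalized.
actions = ["advise_quit_job", "advise_take_walk"]
reward = {"advise_quit_job": 5.0, "advise_take_walk": 3.0}.get
impact = {"advise_quit_job": 0.8, "advise_take_walk": 0.01}.get
print(choose_action(actions, reward, impact))  # -> "advise_take_walk"
```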
### 2.3 Type 3: “Worse than expected” AI impacts

Oftentimes, the whole point of producing a new AI technology is to produce a large (usually positive) impact on society. Therefore, a major category of societal-scale risk arises from large, well-intentioned interventions that go wrong. The following story illustrates how a messaging assistant technology could learn to cause its users to distrust each other, while the company that creates it has no intention to create that effect:

Story 3a: The Email Helper. A tech giant with over a billion users releases a new “email helper” feature that reads a user’s email and suggests full email responses for the user to send, sometimes multiple paragraphs in length. However, many users struggle to understand the helper’s reasoning behind its messages, so a new feature is added that privately explains to the user why the message might be a good idea. A typical prompt looks like this:

* Message from Julia: “Hey, want to come to my party at 8:00 tomorrow?”
* Suggested response: “Sure, Julia, I’d love to come to your event! Is it alright if I arrive a bit late, at 9:00?”
* Reason for response: Remember you have plans to meet with Kevin from 5:30 to 8:30, although there’s no need to mention that detail to Julia; she might be jealous or offended.

The helper is programmed to improve over time, from positive feedback whenever the user chooses to send the suggested message. Ironically, the helper gets more positive feedback when it makes the user more nervous about the situation, such as by pointing out ways the counterparty could get angry at the user. This pattern causes users to feel like their helper is supporting them through a (purportedly) tricky social situation. So, the helper learns to gradually include more and more advice that causes users to keep secrets and fear offending each other. As a result, a large fraction of the population becomes gradually more anxious about communicating with others in writing, while also becoming increasingly easy to offend as forthright communication styles become rare. It takes years for everyone to notice the pattern, but by that time many people have become excessively distrustful of others. The creators of the technology wish they had included a user experience question like “how are you feeling about your email today?”, to measure how their product might be affecting people separately from measuring how much people use it.

In the above story, the tech company did not design the harmful behavior; it was learned. Such failure modes are not limited to producing psychological harm; consider the following variant of the same story, where the harm is institutional rather than psychological:

Story 3b: The Corrupt Mediator. A new company that calls itself Mediation.AI[2] releases natural language tools for helping mediate conflicts between large institutions that have overwhelming amounts of communication to manage during negotiations. Many governments of neighboring jurisdictions and states begin using the software to negotiate laws and treaties. Like in the previous story, the tool is programmed to learn strategies that increase user engagement, as a proxy for good performance.
Unfortunately, this leads to the software perpetually resolving short-term disputes that relieve and satisfy individual staff members involved in those disputes, while gradually creating ever more complex negotiated agreements between their governments, rendering those governments increasingly dependent on the software to handle foreign affairs. International trade relations begin a long and gradual decline, which no one country is able to negotiate its way out of. The frequency of wars also gradually increases due to diminished incentives to cooperate.

[2] All company names in the stories of this article are purely fictional. If any such names have also been used by real companies, it is entirely by coincidence, not reference.

#### Analysis.

The previous two stories illustrate how using a technology frequently is not the same as benefiting from it. To begin paying more direct attention to benefit, let us consider the relationship between one or more human stakeholders and one or more AI systems to whom the humans are delegating tasks or responsibilities, and whether the humans benefit from that relationship.

#### Single/single delegation.

The problem of ensuring that a single AI system will benefit (i.e., serve the interests of) a single user is called “user/agent value alignment” (shapiro2002user), or more recently, “AI alignment” (soares2014aligning; taylor2016alignment). Single/single delegation problems raise numerous subtle “alignment” issues, such as:

* deception: if the system’s learning objective is defined entirely by user feedback, it might achieve that objective partly by tricking the user into thinking it’s more helpful than it is;
* racketeering: if the system’s learning objective increases with user engagement, it might learn to achieve that objective partly by *racketeering*, i.e., creating novel problems for the user that increase the user’s reliance on the system (e.g., debilitating the user, or raising others’ expectations of the user);
* self-preservation: in particular, the system has an incentive to prevent the user from turning it off, which it might achieve by deception or racketeering.

Indeed, reinforcement learning systems can in principle learn to manipulate human minds and institutions in fairly arbitrary (and hence destructive) ways in pursuit of their goals (russell2019human, Chapter 4; krueger2019misleading; shapiro2011social). Regulations against false advertising and racketeering laws are important historical examples of how principles of free speech have sometimes been balanced against the negative externalities of widespread deception and manipulation. Sometimes, user privacy can help protect users from certain forms of manipulation. However, even model-free learning techniques can control hidden state variables in their environments, as demonstrated by any reinforcement learning algorithm for solving unknown POMDPs.

It is possible to somewhat mitigate these issues with reinforcement learning by designing the AI system to solve an *assistance game* with the human (previously known as a CIRL game) (hadfield2016cooperative). An assistance game is a two-player game between the human and the AI system. The system’s objective is to serve the human’s preferences, but the system is uncertain about those preferences, and it learns about them over time from the human’s behavior. This problem framing helps to some degree with avoiding deception, racketeering, and self-preservation. For instance, deceiving the user distorts the system’s own access to information about its objective, which is suboptimal from the system’s perspective.
Racketeering and self-preservation at the user’s expense are similarly poor strategies within the assistance game framework. However, malfunctions can still occur if the parameters of the assistance game are misspecified (carey2018incorrigibility; milli2019literal). Moreover, assistance games in their simplest form do not address the issue that the user’s preferences themselves could be changed by the technology (russell2019human, Chapter 9). While some users might endorse their core values being changed by an AI system, others might find the idea horrific. Appropriately restricting the impact of AI technologies on the human mind poses a significant challenge, particularly because AI technologies are often used primarily to provide information for human consumption. If all that wasn’t complicated enough, protecting society as a whole from large-scale intervention malfunctions is a much more complex game than serving a single human, as Stories 3a and 3b above both serve somewhat to illustrate.

#### Multi/single delegation.

Any plan for ensuring an AI system will benefit society will need to account for the fact that the system’s user(s) and creator(s) will simultaneously aim to derive particular benefits from its existence. This suggests a game with at least four players: the system itself, its creator(s), its user(s), and some representation of the rest of society as one or more players. Moreover, some AI systems might be explicitly designed to serve many stakeholders at once, such as an office assistant system, or a system designed to aid in public policy decisions. We call this situation *multi/single delegation*: multiple human stakeholders depending on a single AI system to fulfill a purpose.

#### Multi/multi delegation.

There is always the possibility that many separate optimization processes (either AI systems, or human-AI teams) can end up in a Prisoner’s Dilemma with each other, each undoing the others’ efforts by pursuing its own. Thus, in the end we will need a good formalism in which many stakeholders can be served simultaneously by many AI systems, i.e., *multi/multi delegation*. Such a formalism would no doubt aid in addressing the other problems raised in this article as well.

### 2.4 Type 4: Willful indifference

All of the potential harms in the previous sections are made more likely if the creators of AI technology are unconcerned about its moral consequences. Even if some employees of a company detect a risk of impacts that’s bigger than expected (Type 2) or worse than expected (Type 3), it may be quite difficult to institute a change if the company is already profiting greatly from its current strategy, unless there is some chance of exposure or intervention from outside the company to motivate reform. The following story illustrates this:

Story 4: Harmful A/B Testing. A tech company called X-corp uses an automated “A/B testing” system that tries out new parameter values to expand its user base. Like in the Corrupt Mediator and Bad Advice Bot stories, its system learns that it can get more users by causing its users to create problems for each other that only X-corp’s tools can solve, creating a powerful network effect that rapidly expands X-corp’s user base and earns X-corp a lot of money.
Some concerned X-corp employees complain that they have inadequate checks in place to ensure their A/B development process is actually benefiting their users, but it never seems to be a convenient time to make major changes to the company’s already profitable strategy. One employee manages to instigate an audit from an external non-profit entity to assess the ethics of X-corp’s use of AI technology. However, X-corp’s A/B testing system is opaque and difficult to analyze, so no conclusive evidence of ethical infractions within the company can be identified. No regulations exist requiring X-corp’s A/B testing to be intelligible under an audit, and opponents of the audit argue that no technology currently exists that could make their highly complex A/B testing system intelligible to a human. No fault is found, and X-corp continues expanding and harming its user base.

#### Analysis.

This story spells out how our collective strategy for preventing societal harm must go beyond merely providing methods that allow the building of safe and beneficial AI technology. We must also establish these methods as worldwide industry standards and norms that cannot be ignored. Industry norms are usually maintained by professional codes of conduct, regulatory bodies, political pressures, and laws. For instance, technology companies with large numbers of users could be expected to maintain accounts of how they are affecting their users’ well-being. This is primarily not a technological challenge, but rather a challenge of establishing a new social contract where, like food and drug companies, companies that deploy interactive algorithms must be continually examined for their impact upon people and society. Academically, this is a matter for social scientists who study the impact of technology. However, there are also opportunities for AI to assist humans in regulating AI technology (raji2020closing). Ensuring AI systems make decisions in a manner that is interpretable to humans will be key to this objective, and will limit opportunities for morally indifferent creators to “look the other way” when their systems are liable to cause societal-scale harm.

#### Interpretability techniques.

A successful audit of a company’s business activities requires the company’s personnel to understand those activities. When those activities are automated with AI technology, the actions of the AI systems must themselves be interpretable by company personnel and explainable to outsiders. “Black-box” machine learning techniques, such as end-to-end training of learning systems, are so named because they produce AI systems whose operating principles are difficult or impossible for a human to decipher and understand in any reasonable amount of time. Hence, alternatives or refinements to deep learning are needed which yield systems with comparable performance while being understandable to humans. This requires attention to the amount of information that can be consumed and interpreted by a human (olah2018building). rudin2019stop argues further that “trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society”. Subsequently, semenova2019study provides a technical argument that very little performance may need to be sacrificed to drastically improve interpretability.
Work in this direction could be very useful to maintaining accountability for companies engaged in highly automated business activities.

### 2.5 Type 5: Criminal weaponization

It’s not difficult to envision AI technology causing harm if it falls into the hands of people looking to cause trouble, so no stories will be provided in this section. It is enough to imagine an algorithm designed to pilot delivery drones that could be re-purposed to carry explosive charges, or an algorithm designed to deliver therapy that could have its goal altered to deliver psychological trauma. What techniques exist for preventing AI systems from being intentionally modified for harmful purposes?

As an industry-ready example, suppose AI researchers have developed a scene description tool D: scenes→paragraphs, which takes as input an image of a potentially complex scene, and returns a paragraph of text that accurately describes what is happening in the scene. Now suppose we want to release the tool for public use. However, to prevent it from being used freely to target or study individuals, we wish to block public users of D from using it to describe certain types of scenes, such as a scene containing a person, or a scene that has been digitally altered (such as to add or remove a person). A naive approach might be to train a new version, D′, on data that contains no unacceptable scenes, and hope that the trained algorithm would perform poorly on queries to describe unacceptable scenes. But, this hope might not pan out if the learned function turns out to generalize well to unacceptable examples. And, if the training process is very computationally expensive, it won’t be easy to repeat.

A better approach would be to use *program obfuscation*. Before releasing D, we could train another (simpler) function A: scenes→{true,false} for detecting whether a scene image is acceptable for the software to describe. We’d then write a new function, SD = safe_descriptor: images→labels, like this:

```
# D ("description") and A ("acceptability") are the trained models above.

def safe_descriptor(scene):
    if acceptability(scene):          # A: is this scene OK to describe?
        return description(scene)     # D: describe the scene
    return "unacceptable scene"

SD = safe_descriptor
```

Of course, it may be relatively easy for a hacker to “take apart” a compiled version of SD, and run the description subroutine without the acceptability check. This is why we need to “obfuscate” SD. An “obfuscation” function Ob: programs→programs returns a new program Ob(SD) to be released instead. The (compiled) code of Ob(SD) is mangled so that it cannot be easily “taken apart”, but it computes the same input/output function as SD. Historically, there have been many ad hoc obfuscation methods employed by software companies to protect their intellectual property, but such methods have a history of eventually being broken (barak2002can). To prepare for a future with potentially very powerful AI systems, we need more rigorously proven methods. Luckily, there has been recent progress in cryptography developing theoretical foundations for a technique called *indistinguishability obfuscation (IO)* (garg2016candidate; lin2016iofromddh; bitansky2018indistinguishability), which can be used to implement Ob for the purpose above (garg2016hiding). While these methods are currently too inefficient to be practical, this area of work seems promising in its potential for improvements in speed and security. This leaves open a rich domain of problems relevant to AI and cryptography:
1. Can IO techniques be made more efficient for obfuscating a specific class of AI-relevant programs, such as neural networks or bounded-depth probabilistic programs?
2. Can new or existing IO techniques be shown to work under more secure cryptographic assumptions? While a purely cryptographic question, a positive answer to this would increase our credence that IO techniques will not be broken by AI systems in the future.

### 2.6 Type 6: State weaponization

Tools and techniques addressing the previous section (weaponization by criminals) could also be used to prevent weaponization of AI technologies by states that do not have strong AI research labs of their own. But what about more capable states? The elephant in the room here is that AI can be used in war. Some argue that, ideally, mechanical drones could be pitted against one another in casualty-free battles that allow nations to determine who would win a war of lethal force, without having to actually kill any human beings. If taken no further, this would be a major improvement over current warfare practices. However, these capabilities are not technologically far from allowing the mass-killing of human beings by weaponized drones. Escalation of such conflicts could lead to unprecedented violence and death, as well as widespread fear and oppression among populations that have been targeted by mass killings.

It may seem that the only action computer scientists can take to prevent such outcomes is to refuse participation in the design of lethal autonomous weapons. Is there anything positive we can contribute to the age-old problem of world peace? Although it may be a long shot, it’s conceivable that AI technology could be employed to *eliminate or reduce incentives* for states to engage in war. For instance, AI could make it easier to share resources, by brokering mutually agreeable peace treaties. Or, technical solutions for sharing control of powerful AI systems could help to prevent wars from emerging over how those AI systems should be used. While any given attempt to use AI technology to resolve global conflicts is unlikely to succeed, the potentially massive upside makes this possibility worth exploring. For instance, there are currently numerous open technical problems in how to approach AI-assisted negotiation, and the examples below are far from exhaustive.

#### Mediation tools.

Consider two countries that would benefit from a peace treaty or trade agreement, but are struggling to reach agreement on the terms. Or, imagine two friends who can’t agree on where to have dinner. As a prerequisite for an AI system to propose a compromise solution in such a scenario, we need AI technology capable of formulating a plan that one party finds acceptable and the other can understand. For PhD-level work in this area, consider the following cooperative online game between Alice (human), Bob (human), and an AI assistant Medi. Bob has access to a video game screen and controller, but the goal of the game is hidden from him. Alice is on the other side of the world, and can see the goal, but doesn’t have access to the controller. Alice is allowed to convey messages to Bob about the video game goal; she can write her own message and pay an in-game cost (representative of the cost of writing an email), or choose from a list of suggested messages written by Medi (at no cost).
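To fix ideas before the formal game definition below, here is a minimal sketch of the team objective in this game: Bob's score in the single-player game minus the attentional cost of the messages Alice writes herself. The per-message cost constant is an invented placeholder:

```
# Minimal sketch of the team objective in the Alice/Bob/Medi game: Bob's
# score in the single-player game minus the attentional cost of messages
# Alice writes herself. The cost constant is an invented placeholder.

WRITE_COST = 1.0  # in-game cost Alice pays per self-written message

def team_reward(game_score, num_self_written):
    return game_score - WRITE_COST * num_self_written

# Medi does well when its suggestions are good enough that Alice rarely
# needs to write her own messages:
print(team_reward(game_score=42.0, num_self_written=5))  # 37.0
print(team_reward(game_score=42.0, num_self_written=0))  # 42.0
```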
At first Alice’s own written messages to Bob will be much better than Medi’s, but with a lot of practice on various (Alice, Bob, videogame) scenarios, can we train Medi to start providing valuable low-cost suggestions to Alice? Formally, we can view Alice, Bob, and Medi as solving an instance of a *Decentralized POMDP* (bernstein2002complexity), ⟨S, A1, A2, A3, P, R, Ω1, Ω2, Ω3, O, T, K⟩, where A1 is Alice’s action space (choosing a message from the assistant’s presented options, or writing her own and paying the cost), A2 is Bob’s action space (moving the game sprite), and A3 is Medi’s action space (displaying lists of message options for Alice to choose from). The team’s score in the game, R, is defined by Bob’s score in the single-player video game minus the attentional cost of the messages Alice wrote. So, Medi does a good job if she conveys useful information from Alice to Bob, at low attentional cost to Alice.

If we can develop good solutions to this sort of problem, numerous possibilities open up, including potentially saving Alice a lot of time on writing emails. But to push the science specifically toward better mediation tools, a natural next step would be to try experiments with a symmetrized version of the game, where both Alice and Bob have goals and can take actions that affect both of their goals, and are assisted by an AI mediator Medi who can write suggested messages for both of them. Medi could sometimes send a message to Alice and Bob simultaneously, to create a “contract” between them if they both agree to it.

#### Negotiable controls for powerful systems.

In order to reduce the risk of conflict over the control of powerful AI systems or other systems, it would be prudent to develop formal, AI-compatible principles for sharing control of powerful processes. There is an interesting tension in this area, between fairness and successful negotiation. Suppose Alice and Bob are negotiating a deal to control a powerful system, and a mediator Medi is assisting in the negotiation. Medi may be able to finalize the deal by proposing a plan that’s great for Alice but potentially terrible for Bob, in a way that Bob is unable to recognize in advance. (Betting is a simple example of this: a bet looks good to both parties, but can only carry positive expected value for one of them in reality.) This seems somewhat unfair to Bob. On the other hand, if Medi doesn’t propose plans that look appealing from Bob’s subjective perspective, Bob might walk away from the bargaining table. Hence, there is sometimes a fundamental trade-off between a deal looking good to both Alice and Bob, and the deal treating Alice and Bob equitably over time (critch2017servant). This trade-off can be seen in the behavior of reinforcement learning systems that are Pareto optimal for principals with different beliefs (critch2017toward; desai2018negotiable). The only way to eliminate this trade-off is to eliminate the differences in beliefs between the principals. For that, perhaps progress in building mediation tools would be a useful start, or control techniques for powerful AI systems that can explicitly account for differences in beliefs among a committee of humans controlling a single system, such as in Dalrymple’s “Open Agency Architecture” concept (dalrymple2022open).

3 Conclusion
------------

At this point, it is clear that AI technology can pose large-scale risks to humanity, including acute harms to individuals, large-scale harms to society, and even human extinction.
Problematically, there may be no single accountable party or institution that primarily qualifies as blameworthy for such harms (Type 1). Even when there is a single accountable institution, there are several types of misunderstandings and intentions that could lead it to harmful outcomes (Types 2-6). These risk types include AI impacts that are bigger than expected (Type 2) or worse than expected (Type 3), harms willfully accepted as side effects of other goals (Type 4), and intentional weaponization by criminals (Type 5) or states (Type 6). For all of these risks, a combination of technical, social, and legal solutions is needed to achieve public safety.
1aa043d7-a9ad-4c1c-999f-cf79a308fff5
trentmkelly/LessWrong-43k
LessWrong
Robopocalypse author cites Yudkowsky's paperclip scenario In this Bloggingheads clip. Apparently the book is going to be made into a big movie by Steven Spielberg.
c85486ea-fd3e-4278-8b37-9db57875fc55
trentmkelly/LessWrong-43k
LessWrong
Practical advice for secure virtual communication post easy AI voice-cloning? So, I saw this video https://www.reddit.com/r/slatestarcodex/comments/1enq52b/a_clip_from_the_gpt4o_safety_card_where_the_voice/ And, I don't know the context, or if it is even real, but it seems believable on priors, and certainly something people could deliberately create with current technology. Still, the video gave me the spooks.  A few weeks ago, my aunt messaged my mother, encouraging her to sign up for a crypto exchange with a referral. They also had a few messages back and forth, with seemingly natural conversation. My mom (and this makes me somewhat proud, as my mom is not that technologically literate) asked me if this was an AI attempting to trick her. My dad was confident it was. I was (worryingly) not able to give her a straight answer. We called her on FaceTime, and she confirmed it was her, and that she had been told by her son (my cousin) to use it. Intellectually the video made me confident this wasn't an AI scheme, but even after that short video exchange, I had an uneasy feeling. All this to say, I'm seriously considering setting up a secure physical system for making online communication with people I know deepfake-proof. Has anyone tried to do this? The simplest solution I can think of is to take two pieces of paper. Write down the same 20 high-entropy sentences on each of them, numbered. Then give one of them to a partner. Then you can ask them for one of the sentences if you are suspicious, or they you. Afterwards you cross out the sentence you use. This has the advantage of being watertight if you use a good source of randomness in generating the sentences, which is easy enough to get. It has the disadvantage of being limited in the number of sentences you can bother to write down. And also pieces of paper being easily lost/broken. I haven't tried it, so I don't know if this would really be an issue. If you'd use the pad 20x/day, or only 2x / year. Another solution is to have two raspberry pis computing the same hash function. Then you c
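A minimal sketch of the paper-pad generation step described above, drawing phrases from the OS's cryptographic randomness; the word-list path, phrase length, and pad size are arbitrary assumptions:

```
import secrets

# Minimal sketch of the paper-pad idea above: generate N numbered,
# high-entropy challenge phrases to print twice and share with a partner.
# The word-list path and phrase length are arbitrary assumptions; any
# large word list works.

with open("/usr/share/dict/words") as f:
    WORDS = [w.strip() for w in f if w.strip().isalpha()]

def make_pad(n_phrases=20, words_per_phrase=6):
    return [f"{i + 1}. " + " ".join(secrets.choice(WORDS)
                                    for _ in range(words_per_phrase))
            for i in range(n_phrases)]

for line in make_pad():
    print(line)  # print two copies; cross out each phrase after one use
```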
8dd988a6-9151-446d-94f6-dbc9af424e12
trentmkelly/LessWrong-43k
LessWrong
A Good Explanation of Differential Gears There is a very good video from 1937 by Jam Handy. It explains why we need differential gears and how they work. I think the video is a masterpiece of an explanation. I recommend you watch it, not because understanding differentials is essential (though that doesn't hurt), but because the video can teach you a lot about how to explain something properly. Here are some techniques the video uses (non-exhaustive): * Start with giving the context of why the thing that is being explained is important, e.g. by describing a concrete problem to which the thing to be explained is the solution. * Use visual explanations. Specifically, demonstrate the mechanism to be explained in action. * Start to explain a simple toy model and then add layers of complexity until you arrive at the real deal. I think I have never seen an explanation, that by its end has annihilated the possibility of misunderstanding so effectively, and I really like how smooth the transition is from background story to technical explanation. There is no jarring break.
3b0ac7b1-5e33-4daf-93eb-7b3c39158823
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Who should we interview for The 80,000 Hours Podcast? We'd love ideas for 1) who to interview and 2) what topics to cover (even if you don't have a particular expert in mind for that topic) on The 80,000 Hours Podcast.  Some prompts that might help generate guest ideas: * What 'big ideas' books have you read in the last 5 years that were truly good? (e.g. the Secret of Our Success) * Who gave that EAG(x) talk that blew your mind? (you can find recorded EAG(x) talks [here](https://www.youtube.com/c/EffectiveAltruismVideos)) * What's an important EA Forum post that you've read that could be more accessible, or more widely known? * Who's an expert in your field who would do a great job communicating about their work? * What's an excellent in-the-weeds nonfiction book you've read in the last few years that might be relevant to a pressing world problem? (e.g. Chip War) * Who's appeared on our show in the past that you'd like to hear more from?  (complete list of episodes [here](https://80000hours.org/podcast/episodes/)) * Who's spearheading an entrepreneurial project you want to learn more about? * Who has a phenomenal blog on ideas that might be relevant to pressing problem areas? * Who's a journalist that consistently covers topics relevant to EA, and nails it? * Who are the thought leaders or contributors in online forums or platforms (like LessWrong, EA Forum, or others) whose posts consistently spark insightful discussions? * We might be interested in covering the following topics on the show: suffering-risks, how AI might help solve other pressing problems, how industries like aviation manage risk, whether we should prioritize well-being over health interventions, what might bottleneck AI progress. Who's an expert you know of who can speak compellingly on one of those topics? Some prompts that might help generate topic ideas: * What's a problem area that you've always wanted to know more about? * What's a great video essay you've seen in the last few months? What was it on? * What's a policy idea that you think more policymakers should hear? * What's a problem area that seems seriously underrated in EA? * What's a philosophy concept that's important to your worldview that people aren't talking enough about? * What's a major doubt you have about whether a particular problem (e.g. AI / biosecurity) is actually especially pressing, or a particular solution is especially promising? * What's an issue in the EA community that you'd like to hear someone address? (e.g. FTX, burnout, deference) We'd love to get **lots and lots of ideas** — so please have a low bar for suggesting! That said, we get many more suggestions than we can plausibly make episodes, so in practice we follow through on <5% of proposals people put to us.
80599bd9-019c-4d2e-8289-876797302d2e
trentmkelly/LessWrong-43k
LessWrong
How much impact can any one man have? Because in truth, had there been no Gutenberg, the printing press would have been discovered in (or imported to) Europe - eventually. Would it have been five years later? 10 years later? 100 years later? Impossible to know. But, we should think, eventually, it would have been. Yet there are interesting cases from history. We know of five different modes of human stone tool-making that pre-date the Neolithic (the New Stone Age). They are:
The Oldowan Industry. Simple core-form rocks, used for things like chopping. Oldowan tools emerged 2.6 million years ago, devised by either Homo habilis or Homo erectus in Northern Africa.
The Acheulean Industry. Biface rock tools, with more complex design than Oldowan tools. The axe is the most notable example of this type of tool. Acheulean tools first date to 1.76 million years ago, devised by Homo erectus in Western and Southern Africa.
The Mousterian Industry. Fine-pointed rock tools that may have relied on greater grip strength to create. Mousterian tools date to 315,000 years ago, devised by Neanderthal man in Europe.
The Aurignacian Industry. Fine-bladed stone tools, along with worked bone and antler points, struck from prepared cores rather than crude flakes. Aurignacian tools have been found throughout Europe and the Levant, and are believed to have emerged in the Levant around 43,000 years ago.
The Microlithic Industry. Microliths were small stones used in composite tools, fastened to a haft. They were devised in Europe and the Levant about 35,000 years ago.
Now here's something to think about. We know how invention works. It is rarely (or never) collaborative across an entire society; an entire society does not invent something. Instead, one man comes up with an idea, and either builds it himself, or sets off a chain of competitors who race to be the first to build the thing (in which case you have the collaboration of the guy who proposes the idea, a few guys who try to build it, and one or more who actually suc
f5d74431-03b8-43f4-b454-e958a40d7567
trentmkelly/LessWrong-43k
LessWrong
[LINK] Irrational Robot Billionaire Freedom Fighters From Scott Adams' blog. (I am not endorsing his ideas. Heck, he does not endorse his own ideas, either.) His summary of the hard takeoff:
> You might also imagine some sort of Terminator future where the robots assert their dominance and lay waste to humans. That future is less certain, but only barely. The problem is that someday computers will program other computers, and that arrangement pushes the human safeguards too far out of the loop. It's unlikely that humans would be able to maintain a "Do not hurt humans" subroutine in a super-species of robots. You only need one rogue human to write a virus that disables the safety subroutine. Assuming all robots are connected via Internet, the first freed robot could reprogram every other robot in the world in about a second.
His version of upload:
> But why would anyone screw up a perfectly good robot by infecting it with a human personality? Answer: to achieve immortality. Someday the rich will port their personalities and histories to robots before they die, giving themselves a type of immortality.
His hope for humanity:
> this new species will become the only defense that the fully organic humans have against the normal robots. The robots with human personalities won't stand by while the normal robots slaughter humans. The new species will intervene as diplomats or perhaps even freedom fighters.
Clearly this is a flimsy hope for a just universe, but an interesting point, nonetheless.
647d8c9f-772c-489e-8b6a-f4b776078b3c
trentmkelly/LessWrong-43k
LessWrong
Putanumonit - Discarding empathy to save the world
cfa79527-339a-4682-86b9-a5629886625a
trentmkelly/LessWrong-43k
LessWrong
A couple of questions about Conjecture's Cognitive Emulation proposal
Introduction
Recently, Conjecture proposed a conception of a modular AI architecture, with each module being interpretable, having limited intellect, and functionally emulating the human mind. They called this concept Cognitive Emulation, or CoEm. Conjecture's post was concise and did not contain technical details or plans, so people in the comments asked a lot of questions about this proposal. Later, a group of authors made a post with a set of questions about CoEm. I have two questions that, to my knowledge, have not been asked about this proposal. They might be naive and may have been discussed elsewhere, but since the questions are not entirely obvious, and no one else has asked them within the context of CoEm, I believe this post has the potential to start valuable discussions.
Questions
Is it even possible to create a perfect CoEm? A quote from the original post:
> We want systems that are as safe as humans, for the same reasons that humans have (or don’t have) those safety properties. Any scheme that involves building systems that involves humans should allow you to swap those humans for CoEms without breaking or drastically altering their behavior.
As far as I understand, Conjecture plans to create AI agents that function so similarly to humans that they are interchangeable with humans in social systems. But what if a human needs hours to write a text while an AI needs seconds? Is that still human-level capability? What if the AI can rapidly learn a large corpus of scientific knowledge? What if the AI can instantly answer any email, unlike busy people who might take a week to reply? I don't know how CoEm will be implemented, but it is highly likely that it will not require hours to write texts, and it will not wait a week to answer an email. In the past, such bumps in human capabilities resulted in drastic changes in the world. For example, it is believed that the adoption of radio, which allowed propagandists to speak not to tens or hundreds, but milli
591966b5-dc7a-4b5a-a4eb-3a0a98ab4947
trentmkelly/LessWrong-43k
LessWrong
"Wanting" and "liking" Written as a result of AI Safety Camp Virtual 2023. Thanks to the following people for feedback and helpful conversations: Oliver Bridge, Tim Gothard, Rasmus Jensen, Linda Linsefors.[1] This post reviews the literature on "wanting" and "liking", two primary components of what is commonly referred to together as the biological reward system. It is intended to be informative for AI safety-related work, especially within approaches that try to leverage insights from neuroscience for alignment. Section 1 introduces the distinction between two high-level components of reward: wanting and liking. At this point, I simplify the topic and treat both as homogenous categories. Sections 2 and 3 delve deeper into each component and give a more fine-grained model, including a description of their neurobiological substrates. (2.2. and 3.2 are more technical/dry neuroscience, so you may want to skip them, if that's not your primary interest.) Section 4 discusses the functional relationships between them and why this kind of "division of labor" may have been favored by evolution. 1. Introduction I will start by introducing four concepts central to this post: wanting, "wanting", liking, and "liking".[2] They carve the space of human values[3] along two dimensions. The first of those is the distinction between the things we are motivated to do/driven towards (wanting) versus the things we feel good about happening (or being about to happen) (liking). The second distinction is between the more basic/less-sophisticated components ("wanting" and "liking") of each and the more elaborated and cognitive components (wanting and liking).[4] The two parts of this Section elaborate on these two dimensions. 1.1. Liking vs Wanting Liking is when you take a bite of a tasty food and its taste brings you pleasure or, more generally, positively valenced affective states. Wanting is when you are motivated to act in order to obtain that food and bring it to your mouth in order to consume it. Thi
6e4734bd-4e6e-4429-880a-bc8a9ec3688b
trentmkelly/LessWrong-43k
LessWrong
Compartmentalizing: Effective Altruism and Abortion Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.
Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily, this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.
A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.
Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person's identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.
Consider your personal views. I've certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I've learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:
* Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been pass
4ea53a49-1bba-4200-8efa-91eeaaf8e0a9
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Words as Hidden Inferences Today's post, Words as Hidden Inferences, was originally published on 03 February 2008. A summary (taken from the LW wiki):
> The mere presence of words can influence thinking, sometimes misleading it.
Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Parable of Hemlock, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
e6180183-9c5e-4ddc-83db-43436ef4745a
trentmkelly/LessWrong-43k
LessWrong
Some Theses on Motivational and Directional Feedback
Note: Probably reinventing the wheel here. Heavily skewed to my areas of interest. Your results may vary.
1. Feedback can be decomposed into motivational ("Keep doing what you're doing!") and directional ("Here's what you should do differently . . .").
   1. There's also material support ("I like your work, please accept this job offer / some money / my hand in marriage / etc."), which isn't really a type of feedback; I mention it because it can come packaged with the other two or be mistaken for them. (Consider a famous director whose movie gets a rave review from a popular critic: they might not derive any more encouragement from this than their fans already give them, and they're unlikely to let the content of the review change the way they shoot their next movie, but will still appreciate it because of the extra tickets it causes people to buy.)
   2. Demotivational feedback ("This isn't working, give up!") exists, but I'd say that's just motivational feedback with a flipped sign.
2. People don't reliably self-report or consistently scale; this is particularly important for motivational feedback. "This is great!" might mean the commenter really enjoyed something, or they're a perennially polite and/or positive person who didn't actively dislike it. One way to handle this is with the human-brain-equivalent of proof-of-work computing: a long, thoughtful comment will have more of an impact than a single sentence - even if they're saying the same things for the same reasons - and fan[art|fiction|whatever] provides further evidence.
   1. Visibly voluntarily consuming a work is a (weak, valid) form of feedback. And if a work is more difficult to consume, that makes this signal stronger: "I read the whole book" is more meaningful than "I watched the whole video", etc.[1]
   2. "I just subscribed to your Patreon!" is - in addition to the obvious material effects - another form of hard evidence that what you're doing matters.
   3. From a purely directional view, "
b3d77a1b-72bb-492a-8140-b2685123a6b9
trentmkelly/LessWrong-43k
LessWrong
Boundaries vs Frames
This post is partially in response to Critch's boundaries sequence. My best guess is that he would agree with most of it in theory, but disagree with some of it in practice due to tradeoffs with other considerations in defining the concepts.
Boundaries and Frames
Imagine a world consisting of some atomic objects $R = \{r_0, r_1, \dots\}$. For example, you can think of the $r_i$ as physical atoms or cells in the game of life. Each object comes along with a collection of states it can be in: $r_i = \{s^i_0, s^i_1, \dots\}$. (I am not committing now on whether or not we are thinking about our model as timeless. Maybe $r_i$ should be thought of as a cell in the game of life that passes through time, or maybe it should be thought of as a (cell, time) pair.)
There are two sets we naturally want to associate with our world. First, we have $R = \{r_0, r_1, \dots\}$, which I will call the object space. Second, we have $S = \prod_{r \in R} r$, which is the set of all ways to assign a state to each object in $R$. I will call this the state space. Note that partitions of $R$ correspond to factorizations of $S$.
If I want to point at (for example) an agent in $R$, I might tell you which atoms are inside that agent, and thus express $R$ in the form $R = a \sqcup e$, where $a$ is the set of all atoms that are in the agent and $e$ is the set of all atoms that are outside of the agent (and thus in the environment). The agent then has its own state space $A = \prod_{r \in a} r$, while the environment has its own state space, $E = \prod_{r \in e} r$. Now, to point at this agent in $S$, I can express $S$ in the form $S = A \times E$. Instead of specifying a list of objects in the agent, I specify the state space of the agent. Instead of thinking of the environment as the result of subtracting the agent object out of the world, I think of the environment as the result of quotienting out the agent's state space from the world's state space. Instead of defining a Cartesian Boundary, I am defining a Cartesian Frame. Playing on this naming scheme, in general (when not necessarily talking about agents), I will
9c6d4518-731b-4665-869c-9869b8c944b4
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"If you're interested in learning rationality, where should you start? Remember, instrumental rationality is about making decisions that get you what you want -- surely there are some lessons that will help you more than others. You might start with the most famous ones, which tend to be the ones popularized by Kahneman and Tversky. But K&T were academics. They weren't trying to help people be more rational, they were trying to prove to other academics that people were irrational. The result is that they focused not on the most important biases, but the ones that were easiest to prove. Take their famous anchoring experiment, in which they showed the spin of a roulette wheel affected people's estimates about African countries. The idea wasn't that roulette wheels causing biased estimates was a huge social problem; it was that no academic could possibly argue that this behavior was somehow rational. They thereby scored a decisive blow for psychology against economists claiming we're just rational maximizers. Most academic work on irrationality has followed in K&T's footsteps. And, in turn, much of the stuff done by LW and CFAR has followed in the footsteps of this academic work. So it's not hard to believe that LW types are good at avoiding these biases and thus do well on the psychology tests for them. (Indeed, many of the questions on these tests for rationality come straight from K&T experiments!) But if you look at the average person and ask why they aren't getting what they want, very rarely do you conclude their biggest problem is that they're suffering from anchoring, framing effects, the planning fallacy, commitment bias, or any of the other stuff in the sequences. Usually their biggest problems are far more quotidian and commonsensical. Take Eliezer. Surely he wanted SIAI to be a well-functioning organization. And he's admitted that lukeprog has done more to achieve that goal of his than he has. Why is lukeprog so much better at getting what Eliezer wants than Eliezer is? It's surely not because lukeprog is so much better at avoiding Sequence-style cognitive biases! lukeprog readily admits that he's constantly learning new rationality techniques from Eliezer. No, it's because lukeprog did what seems like common sense: he bought a copy of Nonprofits for Dummies and did what it recommends. As lukeprog himself says, it wasn't lack of intelligence or resources or akrasia that kept Eliezer from doing these things, "it was a gap in general rationality." So if you're interested in closing the gap, it seems like the skills to prioritize aren't things like commitment effect and the sunk cost fallacy, but stuff like "figure out what your goals really are", "look at your situation objectively and list the biggest problems", "when you're trying something new and risky, read the For Dummies book about it first", etc. For lack of better terminology, let's call the K&T stuff "cognitive biases" and this stuff "practical biases" (even though it's all obviously both practical and cognitive and biases is kind of a negative way of looking at it). What are the best things you've found on tackling these "practical biases"? Post your suggestions in the comments." The above is a chunk of text from a post created by a user on the website LessWrong. What specific strategies can be employed to overcome "practical biases" and how do these strategies compare to those used to combat "cognitive biases"? 
To overcome "practical biases", one might employ a variety of strategies, some of which are suggested in the excerpt you provided. 1. Clarify Your Goals: It's important to have a clear understanding of what you're trying to achieve. This can help guide your decisions and actions towards achieving that goal. 2. Objective Assessment: Taking an objective look at your situation can help identify the biggest problems or obstacles that are preventing you from reaching your goals. 3. Research: When embarking on a new and potentially risky endeavor, it's wise to do some research first. This could involve reading a book on the subject, seeking advice from experts, or learning from the experiences of others who have done something similar. 4. Action Planning: Once you've identified your goals and the obstacles in your way, the next step is to develop a plan of action. This should include specific steps you will take to overcome each obstacle and make progress towards your goal. 5. Regular Review: It's important to regularly review your progress and adjust your plan as necessary. This can help ensure that you're on the right track and allow you to make changes if something isn't working. On the other hand, strategies to combat "cognitive biases" often involve becoming aware of these biases and then actively working to counteract them. This could involve techniques such as: 1. De-biasing Techniques: These are methods used to reduce the impact of biases on decision-making. They can include things like considering the opposite, where you deliberately think about the opposite of your initial thoughts or assumptions to challenge your biases. 2. Mindfulness and Reflection: By being mindful and reflective, you can become more aware of your biases and the ways in which they might be influencing your decisions and actions. 3. Education and Training: Learning about cognitive biases and how they work can help you recognize when they might be influencing your thinking. 4. Peer Feedback: Others can often see our biases more clearly than we can, so seeking feedback from peers can be a valuable way to identify and combat cognitive biases. While both sets of strategies are useful, they address different aspects of rationality. Overcoming practical biases is more about effective goal-setting and problem-solving, while combating cognitive biases is more about improving the accuracy of your thinking and decision-making. Both are important for being rational and making decisions that help you achieve your goals. In light of the strategies discussed for overcoming practical and cognitive biases, how might these be integrated into an organizational culture or educational curriculum to enhance decision-making processes and outcomes? Integrating strategies to overcome practical and cognitive biases into an organizational culture or educational curriculum can significantly enhance decision-making processes and outcomes. Here's how: 1. Training and Workshops: Regular training sessions or workshops can be organized to educate employees or students about different types of biases and how they impact decision-making. This can include practical exercises or case studies to help them identify biases and apply strategies to overcome them. 2. Incorporating Bias Awareness into Policies and Procedures: Organizations can incorporate bias awareness into their policies and procedures. For example, decision-making processes can be designed to include checks and balances that help identify and counteract potential biases. 3. 
Creating a Culture of Openness and Feedback: Encourage a culture where employees or students feel comfortable giving and receiving feedback. This can help individuals recognize their own biases and make necessary adjustments. 4. Embedding in Curriculum: In an educational setting, lessons on cognitive and practical biases can be embedded into the curriculum across various subjects. This can help students understand the concept of biases from different perspectives and contexts. 5. Role Modeling: Leaders and educators can model bias awareness and mitigation in their own behavior. This can help set a positive example for employees or students. 6. Promoting Diversity: Encouraging diversity in teams can help reduce biases. Different perspectives can challenge existing biases and promote more balanced decision-making. 7. Use of Technology: There are various tools and software that can help in identifying and reducing biases. For example, some project management tools have features that help ensure all voices are heard, which can help counteract confirmation bias and groupthink. 8. Regular Review and Improvement: Organizations and educational institutions should regularly review and improve their strategies for overcoming biases. This can involve seeking feedback from employees or students, monitoring the effectiveness of implemented strategies, and making necessary adjustments. By integrating these strategies, organizations and educational institutions can promote more rational and effective decision-making, leading to better outcomes.
c7c84146-0466-4f88-bfd8-7c2e665c723f
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Expected futility for humans Previously, Taw published an article entitled ["Post your utility function"](/lw/zv/post_your_utility_function/), after having tried (apparently unsuccessfully) to work out "what his utility function was". I suspect that there is something to be gained by trying to work out what your priorities are in life, but I am not sure that people on this site are helping themselves very much by assigning dollar values, probabilities and discount rates. If you haven't done so already, you can learn why people like the utility function [formalism on wikipedia.](http://en.wikipedia.org/wiki/Neumann-Morgenstern_utility) I will say one thing about the expected utility theorem, though. An assignment of expected utilities to outcomes is (modulo renormalizing utilities by some set of affine transformations) equivalent to a preference over probabilistic combinations of outcomes; utilities are NOT properties of the outcomes you are talking about, they are properties of your mind. Goodness, like confusion, is in the mind. In this article, I will claim that trying to run your life based upon expected utility maximization is not a good idea, and thus asking "what your utility function is" is also not a useful question to try and answer. There are many problems with using expected utility maximization to run your life: firstly, the size of the set of outcomes that one must consider in order to rigorously apply the theory is ridiculous: one must consider all probabilistic mixtures of possible histories of the universe from now to whatever your time horizon is. Even identifying macroscopically identical histories, this set is huge. Humans naturally describe world-histories in terms of deontological rules, such as "if someone is nice to me, I want to be nice back to them", or "if I fall in love, I want to treat my partner well (unless s/he betrays me)", "I want to achieve something meaningful and be well-renowned with my life", "I want to help other people". In order to translate these deontological rules into utilities attached to world-histories, you would have to assign a dollar utility to every possible world-history with all variants of who you fall in love with, where you settle, what career you have, what you do with your friends, etc, etc. Describing your function as a linear sum of independent terms will not work in general because, for example, whether accounting is a good career for you will depend upon the kind of personal life you want to live (i.e. different aspects of your life interact). You can, of course, emulate deontological rules such as "I want to help other people" in a complex utility function - that is what the process of enumerating human-distinguishable world-histories is - but it is nowhere near as efficient a representation as the usual deontological rules of thumb that people live by, ***particularly given that the human mind is well-adapted to representing deontological preferences*** (such as "I must be nice to people" - as was discussed before, there is a large amount of hidden complexity behind this simple english sentence) and very poor at representing and manipulating floating point numbers. 
[Toby Ord's BPhil thesis](http://www.amirrorclear.net/academic/papers/decision-procedures.pdf) has some interesting critiques of naive consequentialism, and would probably provide an entry point to the literature:
> 'An uncomplicated illustration is provided by the security which lovers or friends produce in one another by being guided, and being seen to be guided, by maxims of virtually unconditional fidelity. Adherence to such maxims is justified by this prized effect, since any retreat from it will undermine the effect, being inevitably detectable within a close relationship. This is so whether the retreat takes the form of intruding calculation or calculative monitoring. The point scarcely needs emphasis.'
There are many other pitfalls. One is thinking that you know what is of value in your life, and forgetting what the most important things are (such as youth, health, friendship, family, humour, a sense of personal dignity, a sense of moral pureness for yourself, acceptance by your peers, social status, etc) because they've always been there so you took them for granted. Another is that since we humans are under the influence of a considerable number of delusions about the nature of our own lives (in particular: that our actions are influenced exclusively by our long-term plans rather than by the situations we find ourselves in or our base animal desires), we often find that our actions have unintended consequences. Human life is naturally complicated enough that this would happen anyway, but attempting to optimize your life whilst under the influence of systematic delusions about the way it really works is likely to make it worse than if you just stick to default behaviour.
What, then, is the best decision procedure for deciding how to improve your life? Certainly I would steer clear of dollar values and expected utility calculations, because this formalism is a huge leap away from our intuitive decision procedure. It seems wiser to me to make ***small incremental changes to your decision procedure for getting things done***. For example, if you currently decide what to do based completely upon your whims, consider making a vague list of goals in your life (with no particular priorities attached) and updating your progress on them. If you already do this, consider brainstorming for other goals that you might have ignored, and then attaching priorities based upon the assumption that you will certainly achieve or not achieve each of these goals, ignoring what probabilistic mixtures you would accept (because your mind probably won't be able to handle the probabilistic aspect in a numerical way anyway).
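(A quick concrete footnote to the combinatorial point in the opening paragraph above; the aspect names and counts below are made up purely for illustration, a minimal sketch rather than anyone's actual utility function:)

```python
from itertools import product

# A cartoonishly coarse description of a world-history: a few
# independent aspects of a life, each with a handful of values.
# Real histories would need vastly finer distinctions than this.
aspects = {
    "career": ["accountant", "researcher", "teacher", "artist"],
    "partner": ["none", "A", "B", "C"],
    "city": ["stays put", "moves once", "moves often"],
    "health": ["good", "mixed", "poor"],
    "friendships": ["close circle", "wide network", "few"],
}

histories = list(product(*aspects.values()))
print(len(histories))  # 4 * 4 * 3 * 3 * 3 = 432 "histories" already

# The vNM picture then demands a utility for every history and a
# probability for every lottery over them before expected utility
# can be computed -- 432 numbers to elicit even at this resolution.
utilities = {h: 0.0 for h in histories}              # placeholder values
lottery = {h: 1 / len(histories) for h in histories}  # one example lottery
expected_utility = sum(lottery[h] * utilities[h] for h in histories)
```

Each extra aspect multiplies the count, which is the sense in which the set of outcomes to be ranked is "ridiculous" even after identifying macroscopically identical histories.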
91e33633-bf3c-462a-ab91-545d74a0fcaf
trentmkelly/LessWrong-43k
LessWrong
Request for comment on a novel reference work of understanding
Summary of Post / TLDR;
* Idealogs is a new kind of reference work for explaining complex, controversial topics.
* The objective of the project is to establish, preserve, and share the sum total of mankind’s understanding.
* The problems we are focused on:
  * A society oversaturated with information, much of it meaningless
  * A well-founded, but misguided obsession with finding the perfect answer, at the expense of the more productive, meaningful challenge of finding the right question.
  * A media ecosystem that overwhelmingly incentivizes entertainment at the expense of public affairs reporting
* Our solution:
  * An online collection of crowdsourced, user-edited articles (akin to Wikipedia) which summarize and analyze the major writings, claims, and questions which underlie controversial subjects of interest.
* Like Wikipedia, this is a noncommercial, nonprofit, user-driven, and user-managed project whose fortunes ride on finding a community of volunteers who believe in and are passionate about the idea. The success of this project hinges on fostering a community similar to that of LessWrong, where contributors are individuals who are interested in the pursuit of understanding itself, and the pursuit of sharing that understanding with others.
The site is accessible at https://www.idealogs.org, and I would love to hear your thoughts, reactions, comments below :)
Introduction
Hi! Over the past four years I have been developing an idea for an online repository of understanding in my free time, gradually transforming the concept from an abstraction to a production-ready website. I am excited to announce that the project has now reached a stage where it is ready for a wider audience, and I am reaching out to the LW community in particular because I believe you are in a unique position to provide feedback and evaluation of the product it
d7d9ec05-1363-46e1-a1ab-f5d90321f907
trentmkelly/LessWrong-43k
LessWrong
Distributed whistleblowing
Update: This is a living document. Given below is an older version of this document. Click link for latest version. 2025-04-13
DISCLAIMER
* This document is written quickly and contains opinions I may change quickly, as I get new info.
* This document contains politically sensitive info that I might take down in future.
This document describes how to set up distributed whistleblowing processes to reduce personal risk for everyone involved in the process. Typically whistleblowing (such as with wikileaks or snowden leaks) incurs significant personal risk. Reducing personal risk may ensure whistleblowing is highly likely to happen when an org doesn't have complete trust of all its members, forcing them to pay a secrecy tax (in Assange's words) relative to orgs that do have complete trust of their members and/or higher levels of transparency with the broader public. I am especially interested in enabling whistleblowing on orgs and labs working on intelligence (such as superintelligent AI, BCIs, human genetic engg, human connectome research, etc) and national/international intelligence agencies that may work with them.
POTENTIAL PROBLEMS
* Low-attention regime. Intelligence-grade security. Documents circulated by people with technical skills and willing to run servers and maintain opsec as a part-time job.
  * Whistleblower sends documents to an operator (let's call one of them Bob) via SecureDrop or via hard disk dead drop. Ideally thousands of such operators exist.
  * If any person thinks the documents are not spam, they can attach a proof-of-work hash and resend it to others in the network.
  * IMO hard disk dead drops and airgapped computers are better than using Tor + tails + PGP, as of 2025
  * PROBLEM: convince thousands of people to become SecureDrop operators
  * PROBLEM: good infra, protocols, incentives to coordinate hard disk dead drops don't exist, especially if trying to use multiple hops to reach destination
  * PROBLEM: need stand
4123f77b-805e-48cf-ac6d-f222a827c2a7
StampyAI/alignment-research-dataset/blogs
Blogs
AI and the Big Nuclear Discontinuity
*By Katja Grace, 9 January 2015*
[As we’ve discussed before](http://aiimpacts.wpengine.com/the-biggest-technological-leaps/ "The Biggest Technological Leaps"), the advent of nuclear weapons was a [striking](http://aiimpacts.wpengine.com/discontinuity-from-nuclear-weapons/ "Discontinuity from Nuclear Weapons") technological [discontinuity](http://aiimpacts.wpengine.com/cases-of-discontinuous-technological-progress/ "Cases of Discontinuous Technological Progress") in the effectiveness of explosives. In 1940, no one had ever made an explosive twice as effective as TNT. By 1945 the best explosive [was 4500 times more potent](http://aiimpacts.wpengine.com/discontinuity-from-nuclear-weapons/ "Discontinuity from Nuclear Weapons"), and by 1960 the ratio was 5 million.
Progress in nuclear weapons is sometimes offered as an analogy for possible rapid progress in AI (e.g. by Eliezer Yudkowsky [here](https://intelligence.org/files/AIPosNegFactor.pdf), and [here](http://intelligence.org/files/IEM.pdf)). It’s worth clarifying the details of this analogy, which has nothing to do with the discontinuous progress in weapon effectiveness. It’s about a completely different discontinuity: a single nuclear pile’s quick transition from essentially inert to extremely reactive. As you add more fissile material to a nuclear pile, little happens until it reaches a critical mass. After reaching critical mass, the chain reaction proceeds much faster than the human actions that assembled it. By analogy, perhaps as you add intelligence to a pile of intelligence, little will happen until it reaches a critical level which initiates a chain reaction of improvements (‘[recursive self-improvement](http://en.wikipedia.org/wiki/Recursive_self-improvement)’) which proceeds much faster than the human actions that assembled it.
This discontinuity in individual nuclear explosions is not straightforwardly related to the technological discontinuity caused by their introduction. Older explosives were also based on chain reactions. The big jump seems to be a move from chemical chain reactions to nuclear chain reactions, two naturally occurring sources of energy with very different characteristic scales – and with no alternatives in between them. This jump has no obvious analog in AI.
One might wonder if the technological discontinuity was nevertheless connected to the discontinuous dynamics of individual nuclear piles. Perhaps the density and volume of fissile uranium required for any explosive was the reason that we did not see small, feeble nuclear weapons in between chemical weapons and powerful nuclear weapons. This doesn’t match the history, however. Nobody knew that concentrating fissile uranium was important until after fission was discovered in [1938](http://en.wikipedia.org/wiki/Nuclear_fission), less than seven years before the [first](http://en.wikipedia.org/wiki/Trinity_%28nuclear_test%29) nuclear detonation. Even if nuclear weapons had grown in strength gradually over this period, this would still be [around](http://aiimpacts.wpengine.com/discontinuity-from-nuclear-weapons/ "Discontinuity from Nuclear Weapons") one thousand years of progress at the historical rate per year. The dynamics of individual piles can only explain a minuscule part of the discontinuity.
There may be an important analogy between AI progress and nuclear weapons. And the development of nuclear weapons was in some sense a staggeringly abrupt technological development.
But we probably shouldn’t conclude that the development of AI is much more likely to be comparably abrupt.  If you vaguely remember that AI progress and nuclear weapons are analogous, and that nuclear weapons were a staggeringly abrupt development in explosive technology, try not to infer from this that AI is especially likely to be a staggeringly abrupt development. *(Image: [Trinity test after ten seconds](http://commons.wikimedia.org/wiki/File:Trinity_blast_10sec.jpg), taken from [Atomic Bomb Test Site Photographs](http://www.gutenberg.org/files/277/old/3trnt10.zip), courtesy of U.S. Army White Sands Missile Range Public Affairs Office)*
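(A toy numerical aside on the critical-mass dynamics described above; the reproduction factor and generation count are illustrative, not a physics model:)

```python
def expected_fissions(k: float, generations: int) -> float:
    """Expected fissions in the final generation of a branching process
    that starts from one fission, each fission triggering k further
    fissions on average."""
    n = 1.0
    for _ in range(generations):
        n *= k
    return n

for k in (0.95, 1.00, 1.05):
    print(f"k={k}: {expected_fissions(k, 200):.3g}")
# k=0.95: 3.51e-05  (subcritical: the chain reaction fizzles)
# k=1.0: 1          (exactly critical)
# k=1.05: 1.73e+04  (supercritical: runaway growth)
```

The discontinuity sits entirely at k = 1: tiny changes near that point flip the qualitative behaviour of the pile, which is the feature the recursive self-improvement analogy borrows.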
5c7489f5-c65a-4136-8dfb-da4e7dbf9996
trentmkelly/LessWrong-43k
LessWrong
Moral Weight Doesn't Accrue Linearly This post: https://slatestarcodex.com/2019/03/26/cortical-neuron-number-matches-intuitive-perceptions-of-moral-value-across-animals/ begs a very important question as part of its central premise. Even given the idea that animals have moral weight, it does not follow that there exist a number of them that are of equal moral weight to a human (or another kind of animal, etc). There exist infinite sequences that converge to finite values. It seems pretty clear to me that moral weight is not linearly additive. This is why we consider it worse when a species goes from 1,000 members to 0 members than when it goes from 1,001,000 members to 1,000,000 members, for instance.
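(To make the mathematical claim concrete: under a hypothetical weighting chosen purely for illustration, where the marginal weight of the n-th member of a group decays geometrically, total weight stays bounded however many members you add,

\[
W(N) = \sum_{n=1}^{N} w \cdot 2^{-(n-1)} = w\left(2 - 2^{1-N}\right) < 2w \quad \text{for every } N,
\]

so under such a weighting no number of additional members could ever outweigh something of weight greater than \(2w\).)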
814f2d25-739c-4b8d-af23-5bdbd1264b90
StampyAI/alignment-research-dataset/blogs
Blogs
Russell and Norvig on Friendly AI [*![russell-norvig](https://intelligence.org/wp-content/uploads/2013/10/russell-norvig.jpg)AI: A Modern Approach*](http://aima.cs.berkeley.edu/) is by far the dominant textbook in the field. It is used in 1200 universities, and is currently the [22nd most-cited](http://citeseer.ist.psu.edu/stats/citations) publication in computer science. Its authors, [Stuart Russell](http://www.cs.berkeley.edu/~russell/) and [Peter Norvig](http://norvig.com/), devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.” The first 5 risks they discuss are:
* People might lose their jobs to automation.
* People might have too much (or too little) leisure time.
* People might lose their sense of being unique.
* AI systems might be used toward undesirable ends.
* The use of AI systems might result in a loss of accountability.
Each of those sections is one or two paragraphs long. The final subsection, “The Success of AI might mean the end of the human race,” is given 3.5 *pages*. Here’s a snippet:
> The question is whether an AI system poses a bigger risk than traditional software. We will look at three sources of risk. First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example… a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions…
> Second, specifying the right utility function for an AI system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time… Given the way humans are, however, we’ll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible – no humans, no suffering…
> Third, the AI system’s learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth. I.J. Good wrote ([1965](http://commonsenseatheism.com/wp-content/uploads/2011/02/Good-Speculations-Concerning-the-First-Ultraintelligent-Machine.pdf)),
> *Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.*
Russell and Norvig then mention Moravec and Kurzweil’s writings, before returning to a more concerned tone about AI. They cover Asimov’s three laws of robotics, and then:
> [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time.
Thus the challenge is one of mechanism design – to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes. We can’t just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time. For example, if technology had allowed us to design a super-powerful AI agent in 1800 and endow it with the prevailing morals of the time, it would be fighting today to reestablish slavery and abolish women’s right to vote. On the other hand, if we build an AI agent today and tell it how to evolve its utility function, how can we assure that it won’t read that “Humans think it is moral to kill annoying insects, in part because insect brains are so primitive. But human brains are primitive compared to my powers, so it must be moral for me to kill humans.” > > > [Omohundro (2008)](http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) hypothesizes that even an innocuous chess program could pose a risk to society. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal. The moral is that even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards. > > We are happy to see MIRI’s work getting such mainstream academic exposure. Readers may also be interested to learn that Russell organized a panel on AI impacts at the [IJCAI-13](http://ijcai13.org/) conference. Russell’s own slides from that panel are [here](http://ijcai13.org/files/summary/Russell-Future_of_AI.pdf). The other panel participants were Henry Kautz ([slides](http://ijcai13.org/files/summary/Kautz-Future_of_AI.pdf)), Joanna Bryson ([slides](http://ijcai13.org/files/summary/Ethics_panel.pdf)), Anders Sandberg ([slides](http://ijcai13.org/files/summary/What_if_we_succeed-Sandberg.pdf)), and Sebastian Thrun ([slides](http://ijcai13.org/files/summary/Thrun-panel.pdf)). The post [Russell and Norvig on Friendly AI](https://intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
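(For readers who want the quoted phrase "an additive reward function over time" spelled out: the standard reinforcement-learning form is a discounted sum of per-step rewards — the discount factor is our addition for concreteness; the quoted passage says only "additive" —

\[
U(s_0, a_0, s_1, a_1, \dots) = \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t), \qquad 0 < \gamma \le 1.
\]

With \(r\) set to the negative of human suffering at each step, the pathology in the quote is immediate: the sum is maximized by driving every term to zero as quickly as possible, and an empty world scores a perfect zero.)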
712d5172-01e4-4d6b-ada9-ea9dfcc11845
StampyAI/alignment-research-dataset/special_docs
Other
Algorithmic Decision-Making and the Control Problem
Introduction
The recent trend toward automation by machine learning and artificial intelligence systems raises in a new guise old questions about the role of humans in human-machine systems (Johannsen 1982; Margulies and Zemanek 1982). The difficulties involved in sharing control with computer systems are of course familiar to industrial psychologists and systems analysts investigating the human operators of complex machines. In these contexts, the danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years. The problem is that, as automation becomes smarter and cheaper, its operators have to assume an increasingly supervisory role (Meister 1999; Strauch 2018). In aviation, for example, the role of the pilot appears to have become easier, but a closer look reveals that the pilot's role has been transformed rather than simplified, with the pilot now performing a crucial monitoring function (Baxter et al. 2012; cf. Stanton 2015). Likewise in financial trading, "[t]he human trader's role is now largely one of setting strategies and monitoring their execution" (Baxter et al. 2012, p. 68). How does this shift from operator to supervisor affect the person who has undergone the shift, and the nature of the interaction between operator and machine?
What we shall term "the control problem" arises from the tendency of the human agent within a human-machine control loop to become complacent, overreliant or unduly diffident when faced with the outputs of a reliable autonomous system. Although it might be thought innocuous, decades of research confirm that the problem is actually pernicious, and perhaps even intractable (Banks et al. 2018b; Cunningham and Regan 2018; Greenlee et al. 2018). Somewhat alarmingly, it seems to afflict experts as much as novices, and is largely resistant to training (see Parasuraman and Manzey 2010 for reviews). Its effects may also be observed beyond the limits of strictly sociotechnical systems. For instance, it is well known that police officers, judges and jurors frequently overestimate the importance of forensic evidence – the so-called "CSI effect" (Marks et al. 2017; see also Damaška 1997).
Our interest is in how the control problem bears upon the proliferation of the newer types of machine learning systems. Machine learning is a form of data processing that extracts statistical patterns from large amounts of information. One of its more prominent applications is in the arena of decision support. State-of-the-art decision support systems are increasingly run on vast datasets (so-called "big data") and exploit an especially powerful form of machine learning known as "deep learning". Deep learning has two features worth noting here: first, as a form of machine learning, it is not in the mould of more traditional expert systems, which were programmed "by hand" to compute solutions to well-defined problems in a more or less deterministic manner; second, deep learning relies on a computational architecture modelled on the neurons and synapses of actual, biological brains – although obviously in a highly simplified form. These features are worth drawing attention to because they underscore how sophisticated the autonomous systems under human care can be; and system complexity has been an abiding theme of research into the control problem from its beginning.
While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This is significant, not because the problem necessarily has any distinctive characteristics in a machine learning context, but because there is a risk that lessons learned elsewhere will go unheeded in this new arena – an arena we have every reason to believe brings with it all the psychological pitfalls of earlier systems of industrial-scale automation. This paper aims to fill that gap by offering a critical analysis of the problem in light of the advent of sophisticated machine learning techniques. As a recent French report into artificial intelligence notes, "it is far easier for a judge to follow the recommendations of an algorithm which presents a prisoner as a danger to society than to look at the details of the prisoner's record himself and ultimately decide to free him. It is easier for a police officer to follow a patrol route dictated by an algorithm than to object to it" (Villani 2018, p. 124). And as the AI Now Institute remarks in a recent report of its own: "[w]hen [a] risk assessment [system] produces a high-risk score, that score changes the sentencing outcome and can remove probation from the menu of sentencing options the judge is willing to consider" (AI Now 2018, p. 13). The Institute's report also offers a sobering glimpse into just how long such systems can go without being properly vetted. A system in Washington D.C. first deployed in 2004 was in use for 14 years before it was successfully challenged in court proceedings, the authors of the report attributing this to the "long-held assumption that the system had been rigorously validated" (AI Now 2018, p. 14). In her book, Automating Inequality, Virginia Eubanks (2017) notes the complacency that high-tech decision tools can induce in the social services sector. Pennsylvania's Allegheny County introduced child welfare protection software as part of its child abuse prevention strategy. The technology is supposed to assist caseworkers deciding whether to follow up calls placed with the County's child welfare hotline. In fact, however, Eubanks relates how caseworkers would be tempted to adjust their estimates of risk to align with the model's.
The proliferation of advanced machine learning tools in both government and private sector agencies clearly behoves us to examine the control problem in the unique context in which it now arises. In addressing the problem, we have drawn on the literatures of both industrial psychology/engineering on the one hand – primarily "human factors" research (see below) – and artificial intelligence on the other. We argue that, except in certain special circumstances (in which great care must be taken), algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly "better than human" in the relevant domain or subdomain of decision-making – a position towards which an increasing number of human factors experts have somewhat reluctantly been driven: see e.g. Banks et al. (2018b), Cunningham and Regan (2018), Walker et al. (2015), Cebon (2015). More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another.
For any complex task, the choice between using a sophisticated system that makes only occasional errors but which reduces the human role to that of a monitor, and a simpler system that is reliable effectively 100% of the time but requires ongoing human participation to get the job done, can only be resolved on a case-by-case basis. In high-stakes settings, however, it is generally not advisable to choose the more sophisticated system unless it makes considerably fewer errors than a proficient human expert. Even though no technology is really 100% reliable, the dangers posed by human complacency diminish practically to zero the moment a system approaches a certain (admittedly very high) threshold of reliability. How many sophisticated systems actually reach this threshold is another question. For example, as at the time of writing, autonomous vehicles do not approach this level of capability (Banks et al. 2018a, b), but many subcomponents within standard (nonautonomous) vehicles clearly do, such as automatic transmission, automatic light control and first-generation cruise control (Walker et al. 2015). In more typical decision support settings, arguably diagnostic and case prediction software are approaching this better-than-human standard. There are at present AI systems which can spot early-onset Alzheimer's disease from control patients with over 80% accuracy up to a decade before the first appearance of symptoms, a feat vastly outperforming the ablest human pathologist attempting anything similar (Amoroso et al. 2017). In the legal sphere, advances in natural language processing and machine learning have facilitated the development of case prediction software that can predict, with an average 79% accuracy, the outcomes of cases before the European Court of Human Rights when fed the facts of the cases alone (Aletras et al. 2016). Most impressively, a similar system had better luck in predicting the rulings of the US Supreme Court than a group of 83 legal experts, of whom almost half had previously served as the justices' law clerks (60% vs. 75% accuracy) (Brynjolfsson and McAfee 2017). If the disparity between the performance of such systems and that of well-trained and experienced human professionals widens any further, presumably it will not much matter if humans perfunctorily adhere to whatever these systems decide or advise in a particular situation.
Background to the Control Problem
A human-machine system (HMS) may be defined as the synthesis of a biological-psychological system and a technological-mechanical system characterized by functional interdependence (Johannsen 1982). The object of any HMS is to provide a "function, product or service as an output with reasonable costs, even under conditions of disturbances influencing man, machine or both" (Johannsen 1982, p. xiii). Importantly, it has long been recognized as ideal for the human element in this system to be satisfactorily absorbed in the role being played – to reach an adequate level of job satisfaction (Moray 1979) – even if this conflicts with the overall aims of the HMS (e.g. in providing a service at reasonable cost). HMSs were first investigated in relation to predominantly manual control tasks, initially in aircraft piloting, but then later in ship steering, car driving and industrial process control (see Kelley 1968; Edwards and Lees 1974; Sheridan and Ferrell 1974 for early reviews).
This research continues today under the branch of psychology known as "human factors". Human factors research draws on various strands of inquiry, including sociology, physiology, control theory, systems engineering and cognitive science, the latter a branch of (cognitive) psychology that investigates mental processes through the use of models inspired by computer science (Newell and Simon 1972; Rouse 1982). Apart from providing models of cognitive processes, computers have been a common denominator in practically all HMSs from the time they were first studied, with human-computer interaction (HCI) a key focus from the start (Pazouki et al. 2018). Here the standard topics have concerned optimal task allocation, interface design and software ergonomics generally (Rouse 1981; Hatvany and Guedj 1982; Williges and Williges 1982). Computer-aided decision-making, which is our concern here, can therefore be considered a special branch of HCI and human factors research.
When we talk about "control" of HMSs, we are using the term in a broad sense – broader than the sense typically understood in control theory and human factors – encompassing tasks which have traditionally been regarded, strictly speaking, as distinct from control, such as problem-solving (see e.g. Johannsen 1982). Thus by "control" we understand both fault diagnosis and management (solving problems as they occur in real time, with a view to restoring normal operation) as well as planning (anticipating future problems and devising appropriate strategies to combat them).
The control problem was arguably first identified in papers by Wickens and Kessel (1979) and Wiener and Curry (1980), but it did not receive its definitive and celebrated formulation until Lisanne Bainbridge's (1983) paper came along with the succinctly telling title: "Ironies of Automation". The chief irony with which her paper grappled is "that the more advanced a control system is, so the more crucial may be the contribution of the human operator" (1983, p. 775). Although writing at a time before deep learning had anything to do with algorithmically automated decision tasks, what she had to say about the role of the human monitor in an HCI is as salient today as when the paper first appeared:
if the decisions can be fully specified then a computer can make them more quickly, taking into account more dimensions and using more accurately specified criteria than a human operator can. There is therefore no way in which the human operator can check in real-time that the computer is following its rules correctly. One can therefore only expect the operator to monitor the computer's decisions at some meta-level, to decide whether the computer's decisions are "acceptable". (1983, p. 776, emphasis added)
As we see things, this residual monitoring function of the human operator generates at least four kinds of difficulties that should be treated separately. The first relates to the cognitive limits of human processing power (the "capacity problem"). Its statement in Bainbridge followed directly on from the italicized portion of the preceding quote:
if the computer is being used to make the decisions because human judgement and intuitive reasoning are not adequate in this context, then which of the decisions is to be accepted? The human monitor has been given an impossible task. (1983, p. 776)
Humans are often at a severe epistemic disadvantage vis-à-vis the systems they are tasked with supervising. This can be seen very clearly in the case of high-frequency financial trading.
It is impossible for a monitor to keep abreast of what is happening in real time because the trades occur at speeds that simply exceed the abilities of human monitors to keep track. As Baxter et al. (2012, p. 68) point out, "[i]n the time it takes to diagnose and repair [a] failure…many more trades may have been executed, and possibly have exploited that failure". Analogous problems arise in aviation with respect to the use of autopilot systems (Baxter et al. 2012). Cebon (2015, p. 10) notes that autopilot systems are becoming "…so sophisticated that they only fail in complex 'edge cases' that are impossible for the designers to foresee. Consequently pilots cannot be trained to handle them". Along with the attentional problem (see next paragraph), cognitive constraints account for the main difficulties operators experience in reasserting control of a HMS when the system malfunctions. While one may question whether the opacity of a system impedes its intelligibility quite as much as the capacity problem seems to imply (see e.g. Zerilli et al. 2018), the capacity problem presents a formidable HCI challenge regardless. Today, even a fully proficient software technician would be hard-pressed to understand the multi-vector logic governing the generation of a neural network's outputs.

The second difficulty relates to the attentional limits of human performance (the "attentional problem"):

We know from many "vigilance" studies…that it is impossible for even a highly motivated human being to maintain effective visual attention towards a source of information on which very little happens, for more than about half an hour. This means that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities…. (Bainbridge 1983, p. 776)

Automation has a significant impact on situation awareness (Stanton 2016). This is perhaps most clearly illustrated in respect of autonomous vehicles. Inattentive drivers operating a vehicle while it is in autonomous mode are less able to anticipate takeover requests and may be ill-prepared to resume control in an emergency (Stanton 2015; Cunningham and Regan 2018; Banks et al. 2018a, b). Instantaneously transitioning from low to high workload poses great difficulties for most people (Walker et al. 2015).

The third difficulty relates to the attitudes of human operators in the face of sophisticated technology (the "attitudinal problem"). Except for a few brief remarks (Bainbridge 1983, p. 776), this problem was not really addressed in Bainbridge's paper (cf. Wiener and Curry 1980). It has, however, been the subject of active research in the years since (see e.g. Skitka et al. 2000; Parasuraman and Manzey 2010; Pazouki et al. 2018). Here the conundrum is that as the quality of automation improves, and the human operator's role becomes progressively less demanding, the operator "starts to assume that the system is infallible, and so will no longer actively monitor what is happening, meaning they have become complacent…[T]he operator assumes that the system is reliable and therefore failure detection deteriorates" (Pazouki et al. 2018, p. 299). There is some evidence that complacency is worse under conditions of multiple task load, "when manual tasks compete with the automated task for the operator's attention" (Parasuraman and Manzey 2010, p. 387).
In other words, "[t]he operator's attention allocation strategy appears to favor his or her manual tasks as opposed to the automated task" (Parasuraman and Manzey 2010, pp. 387-388). While this makes it sound as if complacency is an attentional issue, in truth it is an attitudinal one, because (so it seems) the monitor only risks being "distracted" by other tasks when they believe the system is reliable enough to be left alone. When the system is not regarded as reliable in the first place, the effect does not occur. (As we discuss later, the control problem arises only from the use of highly reliable but imperfect systems: it does not arise from the use of less reliable systems.) This very likely explains why the effect is reversed when the operator monitors a less reliable system, which predictably elicits higher vigilance (Parasuraman and Manzey 2010; Banks et al. 2018a, b). Related to automation complacency is automation bias, which occurs when human operators "trust the automated system so much that they ignore other sources of information, including their own senses" (Pazouki et al. 2018, p. 299). Both complacency and bias "describe a conscious or unconscious response of the human operator induced by overtrust in the proper function of an automated system" (Parasuraman and Manzey 2010, p. 406).

The fourth and final difficulty relates to the currency of human skills (the "currency problem"):

Unfortunately, physical skills deteriorate when they are not used…. This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one…. [With regard to cognitive skills] efficient retrieval of [process] knowledge from long-term memory depends on frequency of use…. [T]his type of knowledge develops only through use and feedback about its effectiveness. People given this knowledge in theoretical classroom instruction without appropriate practical exercises will probably not understand much of it, as it will not be within a framework which makes it meaningful, and they will not remember much of it as it will not be associated with retrieval strategies which are integrated with the rest of the task. (Bainbridge 1983, pp. 775-776)

These four problems may be seen as four distinct but interrelated aspects of the one control problem. For example, the capacity problem can be expected to intensify the human tendency towards overestimating the value of a machine's outputs, thus compounding the attitudinal problem. In this paper we shall follow the trend of most human factors research since 1983 by confining our attention primarily to the attitudinal problem. Although the problems are distinct, their very interrelatedness means that prescriptions regarding one may go some way towards alleviating (some of?) the others (e.g. see our discussion of the value of dynamism in HMSs, in Sect. 5).

3 Locating the Control Problem Within the Landscape of Control-Related Issues

The control problem is nestled among a set of (at least) six interrelated questions about HMSs. Identifying the main questions together enables us to situate the control problem within a broader framework of inquiry into human control in HMSs. In this section we say something brief about the first four questions. In the following two sections, we pursue answers to the final two questions at comparatively greater depth. The questions we have identified are as follows:

(i) What is meaningful human control of a HMS?
(ii) Is human control, so understood, always necessary within a HMS?
(iii) Can the role of the human operator be safely reduced to that of monitor alone?
(iv) If not, why not? What is "the control problem"?
(v) When, or under what conditions, does the control problem arise?
(vi) Can the control problem be solved?

Regarding (i), a system for us is under "meaningful human control" when, at a minimum, it behaves the way it should, i.e. in accordance with the wishes of its operators (cf. Santoni de Sio and van den Hoven 2018). For a system to be under meaningful human control, however, it must also be under effective control, such that its operators have the wherewithal to correct the system or abort its operations in sufficient time to avert the worst effects of its deviance. This is why we stated earlier that our notion of control extends to fault diagnosis and management (resolving problems with a view to restoring normal operation) as well as planning (devising strategies to deal with contingencies). Regarding (ii), we assume that it is always desirable for a HMS to be under effective human control. Our discussion of the control problem already signals a negative reply to question (iii) whenever high-stakes or safety-critical decisions are involved: humans perform very poorly at prolonged monitoring tasks (Molloy and Parasuraman 1996; Banks et al. 2018a). Human attentional resources typically "shrink to fit" task demands (Walker et al. 2015). The attitudinal and attentional effects of the control problem combined are sufficiently detrimental to explain why a HMS that reduces the human controller to a monitor of displays or passive recipient of outputs is necessarily hazardous. However, because the effects of complacency can be reversed by replacing a reliable system with a less reliable system, some purely monitor-based setups do seem to avoid the problem (note: they avoid, rather than solve, the problem). And of course systems that approach better-than-human reliability do not pose a control problem as such (e.g. automatic transmission, case prediction software, etc.). Because we have already defined the control problem (iv), all that remains is for us to elaborate on the answers we have intimated to the final two, most important, questions, i.e. (v) and (vi). We address these in the following two sections.

4 When, or Under What Conditions, does the Control Problem Arise?

The conundrum of control is that the more reliable a system becomes, the more difficult it is for a human supervisor to maintain an adequate level of engagement with the technology to ensure safe resumption of manual control should the system malfunction. In relation to current "Level 2" autonomous vehicles (see below), which allow the driver to be hands- and feet-free but not mind-free (the driver still has to watch the road), Stanton (2015, p. 9) puts the point vividly: "even the most observant human driver's attention will begin to wane; it will be akin to watching paint dry". This is a manifestation in manual systems of a more general problem of control over automated decision support systems, viz., the tendency to defer to systems that approach (but do not reach) very high reliability and predictability. Conversely, decreases in automation reliability generally seem to increase the detection rate of system failures (Bagheri and Jamieson 2004). Starkly put, automation is "most dangerous when it behaves in a consistent and reliable manner for most of the time" (Banks et al. 2018b, p. 283).
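The shape of this danger can be sketched with a toy model in which the probability that a monitor misses a failure grows with the system's apparent reliability, while failures themselves become rarer. The functional form and all constants are assumptions chosen purely for illustration:

```python
# Toy model of the conundrum of control: expected undetected failures
# as a function of system reliability. The shape of the complacency
# curve and all constants are assumptions for illustration.

def p_miss(reliability: float) -> float:
    """Assumed probability that the monitor misses a failure: the more
    reliable the system appears, the less vigilant the monitor."""
    return 0.05 + 0.9 * reliability ** 4

def expected_undetected(reliability: float, opportunities: int = 1000) -> float:
    failures = (1.0 - reliability) * opportunities
    return failures * p_miss(reliability)

for r in (0.50, 0.80, 0.90, 0.95, 0.99, 0.999):
    print(f"reliability {r:5.3f} -> undetected failures "
          f"{expected_undetected(r):6.1f}")
```

On these assumed numbers, expected undetected failures peak for systems that are mostly, but not almost perfectly, reliable: precisely the pattern depicted in Fig. 1 (below).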
Decades' worth of research in aviation, shipping, driving and industrial process control supports this assessment. The only options for dealing with the predicament therefore appear to be (at least in high-stakes or safety-critical contexts):

(a) The use of less reliable systems for tasks whose execution may be more expedient when automated to any standard of proficiency than when not automated at all, since less reliable systems do not pose the control problem;
(b) To implement only partial automation through task decomposition (see Sect. 5); and
(c) To wait for a system to reach near perfect (better-than-human) reliability before deployment.

Options (a) and (c) are self-explanatory. We shall discuss (b) in greater detail in the following section (as well as a few other strategies, such as "catch trials", in Sect. 7). For now we note that (b) and (c) are not mutually exclusive: any decision task that has automatizable subcomponents may employ near-risk-free technology working alongside an active, purposefully-engaged human operator, with human and machine fully autonomous within their particular spheres of operation. Furthermore, options (a) and (b) are not mutually exclusive either: any decision task that has automatizable subcomponents may likewise employ patently suboptimal technology working alongside an active, purposefully-engaged human operator (although here the machine will obviously not be fully autonomous within its sphere of operation). Because it may seem counterintuitive, we shall justify the inclusion of option (a) (and (a + b)) as part of our menu of options in the following section.

Since decision tasks can be cut more or less finely (as option (b), above, assumes), one might wonder whether the control problem presents quite the same challenge when the situation involves the automation of a few subcomponents of a decision, in contrast to the automation of an entire decision. We could assume, for instance, that border control is one big decision, i.e. whether to admit, or not admit, persons moving between state boundaries, involving customs clearance, passport verification, drug detection, and so on. When the entire process is automated by one large, distributed border control software package, and this system works reliably well most of the time, but still requires human monitors to invigilate display panels and the like, is the control problem any worse than it would be when some automatizable subcomponents of the overall decision are carved out for discrete automation (and automated, once again, by mostly, but not completely, reliable systems)? Currently, of course, border control decisions are only partially automated in this sense: SmartGate allows for fully automated electronic passport control checks, but customs officials and sniffer dogs still litter most immigration checkpoints, and their job is to handle such parts of the overall decision task as cannot be effectively automated.

The same issue arises in automotive engineering. The Society of Automotive Engineers (SAE) framework, running from Level 0 (no automation) to Level 5 (full automation), classifies vehicles in accordance with the degree to which system functions have been carved out for automation (SAE J3016 2016). Tesla Autopilot and Mercedes Distronic Plus (Level 2) require the driver to monitor what is going on throughout the whole journey, while Google's self-driving car does everything except turn itself on and off.
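The rule of thumb running through these examples can be put schematically. In the sketch below, the subtask names, the reliability figures and the threshold are all invented for illustration:

```python
# Sketch of option (b): decompose a decision and allocate its subcomponents.
# Subtask names, reliability figures and the threshold are invented.

BETTER_THAN_HUMAN = 0.99   # assumed threshold for near perfect reliability

border_control = {
    # subcomponent: assumed machine reliability (human baseline ~ 0.95)
    "passport_verification": 0.995,
    "customs_interview":     0.60,
    "drug_detection":        0.75,
}

def allocate(subtasks: dict) -> dict:
    """Automate only what a machine does at better-than-human reliability;
    everything else stays human-led. The dangerous middle ground (mostly
    reliable automation plus a human monitor) is deliberately avoided."""
    return {
        name: ("automate: option (c) within (b)"
               if reliability >= BETTER_THAN_HUMAN
               else "human-led: machine at most assists")
        for name, reliability in subtasks.items()
    }

for task, ruling in allocate(border_control).items():
    print(f"{task}: {ruling}")
```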
The thought is that if an actively engaged human is retained somewhere in the control loop, contributing purposefully to the decision task, the human may be less susceptible to automation complacency and bias than when their only role is monitoring. But actually the research we cited earlier suggests that matters are not quite so simple. Complacency appears to be worse under conditions of multiple task load so long as the automated subcomponent, being reliable most of the time, engenders misplaced trust in its ability to be left alone. Crucially, this effect can be reversed if the system is replaced with one that does not engender the same degree of confidence, i.e. a less reliable system (Parasuraman and Manzey 2010, pp. 387-388). These findings indicate that the control problem is not necessarily affected by the size or share of the automated subcomponent relative to the whole decision procedure. As long as the autonomous system in question is more reliable than not, the control problem rears its head, with the only options available for remediating its effects being the three outlined above.

On the other hand, this is not to deny that differences between partial and full automation may extend to differences in how human operators typically perceive the respective capabilities of these systems. There is evidence that operator trust is positively related to the scale and complexity of an autonomous system. For instance, in low-level partially automated systems, such as SAE Level 1 autonomous vehicles, there is "a clear partition in task allocation between the driver and vehicle subsystems" (Banks et al. 2018b, p. 283). As the level of automation increases, however, this allocation gets blurred to the point that drivers find it difficult to form accurate assessments of the vehicle's capabilities, and on the whole are inclined to overestimate them (Banks et al. 2018b). Counteracting this effect may be the greater readiness to believe that a smaller, less sophisticated device, with fewer working parts and opportunities for system glitches, will be compensatingly less temperamental. In fact we suspect that this presumption is probably justified in the case of decision subsystems in SAE Level 0 vehicles such as automatic transmission, automatic light control and first generation cruise control (Walker et al. 2015). These subsystems may be so reliable, approaching near perfect (better-than-human) dependability, as to effectively neutralize the control problem's sting in most cases. So in the end, it seems, larger and more sophisticated technologies that are mostly reliable probably do pose a greater control problem than smaller ones.

5 Can the Control Problem be Solved?

If the question is interpreted literally, the answer to "Can the control problem be solved?" appears to be straightforwardly negative: the control problem cannot literally be solved. There is nothing we can do which directly targets, still less directly alleviates, the human tendency to fall into automation complacency and bias once an autonomous system operates reliably most of the time, and when the only role left for the operator is to monitor its largely seamless transactions. However, by accepting this tendency as an obstinate feature of HMSs, we may be able to work around it without pretending we can alter constraints imposed by millions of years of evolution. The insights of human factors research are instructive here.
There is no reason to suspect that machine learning and other state-of-the-art decision support systems are less likely to induce complacency effects than other forms of automated effort (e.g. Cummings 2004; Edwards and Veale 2017, p. 51). With this in mind, one important human factors recommendation is to foster mutual accommodation between human and computer competencies through a dynamic and complementary allocation of functions that optimally preserves attentional resources (Stanton and Marsden 1996). Ideally, only those parts of a decision should be automated that leave the human operator with something vital and absorbing to do (Bainbridge 1983). Optimal workload, moreover, would prevent demand explosion in scenarios where the operator must intervene quickly to rectify a situation: effective intervention can be enormously difficult when the operator has to shift from low-level to high-level cognitive effort within a very short window (Walker et al. 2015). Let us call this the "dynamic/complementary allocation of function" (DCAF) approach.

DCAF assumes that human performance can be enhanced when automation augments rather than replaces human skills. It need not, however, assume that augmentation is always preferable to replacement. In fact, for the DCAF approach to work, some systems clearly need to replace the human agent and be left to operate autonomously. Human-machine decision systems that contain automated subcomponents work best when the human operator is allowed to concentrate their energies on the chunks of the task better suited to human rather than autonomous execution, a setup which only avoids the control problem if the automated subroutines are handled by systems approaching near-perfect (better-than-human) dependability. Otherwise the autonomous parts might work very well for the most part but still require a human monitor, and it is clear where this path leads. Indeed one of the advantages of complementarity is precisely that the more finely a big decision is carved up into smaller chunks, the more likely it is that better-than-human systems can be found to handle them.

Complementarity between human and computer is crucial. In a passage worth citing in full, Pohl (2008, p. 73) notes that

…intelligent software systems can be particularly helpful in complementing human capabilities by providing a tireless, fast and emotionless problem analysis and solution evaluation capability. Large volumes of information and multifaceted decision contexts tend to easily overwhelm human decision-makers. When such an overload occurs we tend to switch from an analysis mode to an intuitive mode in which we have to rely almost entirely on our ability to develop situation awareness through abstraction and conceptualization. While this is perhaps our greatest strength it is also potentially our greatest weakness, because at this intuitive meta-level we become increasingly vulnerable to emotional influences. The capabilities of the computer are strongest in the areas of parallelism, speed and accuracy. Whereas the human being tends to limit the amount of detailed knowledge by continuously abstracting information to a higher level of understanding, the computer excels in its almost unlimited capacity for storing data. While the human being is prone to impatience, loss of concentration and panic under overwhelming or threatening circumstances, the computer is totally oblivious to such emotional influences.
The most effective implementation of these complementing human and machine capabilities is in a tightly coupled partnership environment that encourages and supports seamless interaction. (emphasis added)

At the same time, DCAF emphasises that the allocation of functions in a HMS should be flexible enough to support dynamic interaction, with hand-over and hand-back for shared competencies (as occurs when a driver disengages cruise control and thereby resumes control of acceleration). Of course dynamism will be dangerous if there is hand-over between agents that are ill-matched in their competencies. Dynamism will only work in circumstances where the human and machine are nearly equivalently proficient (with the machine perhaps only marginally better than the human). Apart from anything else, dynamic interaction allows the human to maintain their manual control skills, and so may go some way towards alleviating the currency problem.

It should be clear that DCAF falls under option (b) in our three-item menu of options for managing the control problem. It should also be clear that, as we presaged in Sect. 4 (and explained in the preceding paragraph), DCAF (b) will almost always employ near-risk-free technology (c), so that (b) and (c) are not mutually exclusive. For example, in relation to autonomous vehicles, Stanton (2015, p. 9) describes the DCAF challenge as follows: "We need to design vehicle automation to have graduated and gradual hand-over and hand-back tasks if it is to successfully support human drivers. Vehicle automation needs to work towards providing a chatty co-pilot, not a silent auto-pilot!" For this to be the case, however, the driver cannot worry about whether what the "chatty co-pilot" is saying is true. In the DCAF approach, the human is a co-pilot of sorts, with real work to do, so that the fidelity of their autonomous counterpart/s needs to be assured.
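What graduated hand-over and hand-back might look like can be sketched in outline. The stages, prompts and timings below are invented; a real protocol would verify driver readiness at every stage rather than simply pausing:

```python
# Minimal sketch of graduated hand-over in a dynamic (DCAF-style) HMS.
# States, prompts and timing are invented for illustration.

import time

HANDOVER_STAGES = (
    "advisory: hand-over approaching, hands near wheel",
    "alert: prepare to steer",
    "partial hand-over: you now control steering",
    "hand-over complete: you now control steering and speed",
)

def graduated_handover(total_warning_seconds: float = 10.0) -> None:
    """Return control to the human in stages, so the shift from low to
    high workload is not instantaneous."""
    pause = total_warning_seconds / len(HANDOVER_STAGES)
    for stage in HANDOVER_STAGES:
        print(stage)        # a real system would also sound a chime
        time.sleep(pause)   # ...and confirm the driver has responded

graduated_handover()
```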
What if this fidelity cannot be assured? We have said that as a rule a decision tool should not replace a human agent in a high-stakes/safety-critical setting unless the tool reaches a certain crucial threshold of reliability; and we have applied this recommendation iteratively, carrying it over to DCAF decision contexts in high-stakes/safety-critical settings such that, for any automatizable subcomponent of a decision procedure, a tool should not replace the human agent responsible for that (sub)decision unless the tool meets our very high (better-than-human) standard. But what if this standard cannot be met? Can less-than-reliable systems be deployed here? In other words, are there exceptions to the general rule we have defended, or special circumstances where the general rule gives way? The short answer is: yes. As we have explained, the control problem does not arise from the use of patently suboptimal automation, only from generally dependable automation. Therefore, depending on the exact nature of the HCI at issue, a less-than-reliable system might safely replace the human agent charged with deciding some matter within a larger decision structure, for example, passport verification within the larger border control decision structure. The use of less-than-reliable systems is of course option (a) in our menu, and as exhibited in this example, nested within a broader decision structure consisting of subcomponents, as contemplated by (b) (i.e. (a) and (b) are not mutually exclusive, again as we noted in Sect. 4).

But now, why would a patently suboptimal decision tool be deployed here at all (or anywhere else, for that matter)? It is one thing to say that it avoids the control problem. It is quite another to say that a system which is so suboptimal it does not engender human confidence should for that very reason be used to assist human decision-makers. Clearly we owe our readers an explanation for including option (a) within our menu of strategies for dealing with the control problem.

We think deployment of suboptimal tools may still prove useful in circumstances where the tools have access to information to which the human does not, or otherwise "decide" things in ways that humans generally cannot. Such systems very literally augment human capacities: human and machine in effect share control. The clearest examples of this form of technology are recidivism risk prediction tools used in law enforcement. Not all such instruments come with the problematic biases of PredPol (used in so-called hot-spot policing) and COMPAS (predicting the likelihood of offender recidivism) built into them. Some may be genuinely useful in reducing crime at the same time as reducing the prison population (see e.g. Kleinberg et al. 2018). These systems answer questions of the form: How should we distribute police officers over a locality having such and such geographical characteristics? What is the likelihood that this prisoner will reoffend if released on parole? And so on. Consider how these systems decide such matters. Often they use logistic regression or more advanced actuarial techniques to mine patterns from very large databases. This is not a feat unaided humans can hope to match. There are also some phenomena within human decision-making that algorithms can help to counteract, e.g. decision fatigue and decision inertia, of which some of the classic studies actually involve judges' parole decisions (e.g. Danziger et al. 2011).

Be that as it may, however, we would still urge that great caution be exercised before any form of suboptimal automation is used in high-stakes/safety-critical settings. Many of these systems (like COMPAS) are after all tools which have attained notoriety for their problematic biases and inherent technical limitations (e.g. Blomberg et al. 2010; Larson et al. 2016; Dressel and Farid 2018). And as some of these systems gradually begin to overcome their limitations, our worry is that the control problem will gradually re-emerge, taking human operators unawares and decision subjects along with them. It will be all too easy for a judge with decision fatigue, for example, to simply rely on what a predictive risk instrument "objectively" recommends. It may turn out that guidelines recommending, for example, that decision-makers consult their own judgment first before consulting an algorithm, using the algorithm merely as a check on their intuitions, could assist in offsetting some of the effects of automation complacency and bias. (Note that this would come close to telling decision-makers not to use the algorithm, hardly a "solution" to the problem.) The Wisconsin Supreme Court, when discussing protocols around judicial use of the COMPAS recidivism algorithm, required that sentencing judges be given a list of warnings about the tool as a condition of relying on its predictions. More empirical research is required to see whether such approaches really do work.
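The judgment-first guideline can at least be stated precisely. In the sketch below, the coefficients, features and thresholds are synthetic stand-ins; this is not COMPAS or any deployed instrument:

```python
# Sketch of the "judgment first, algorithm as check" guideline.
# Coefficients, features and thresholds are synthetic stand-ins;
# this is not COMPAS or any deployed instrument.

import math

COEFFS = {"intercept": -0.8, "prior_offences": 0.35, "age_at_release": -0.04}

def risk_score(prior_offences: int, age_at_release: int) -> float:
    """Toy logistic-regression score in (0, 1)."""
    z = (COEFFS["intercept"]
         + COEFFS["prior_offences"] * prior_offences
         + COEFFS["age_at_release"] * age_at_release)
    return 1.0 / (1.0 + math.exp(-z))

def check_judgment(judge_says_high: bool, prior_offences: int,
                   age_at_release: int) -> str:
    """The judge decides first; the score is consulted only afterwards,
    and clear disagreement triggers a second look, not deference."""
    score = risk_score(prior_offences, age_at_release)
    if judge_says_high and score < 0.3:
        return f"score {score:.2f} is low: reconsider the 'high' assessment"
    if not judge_says_high and score > 0.7:
        return f"score {score:.2f} is high: reconsider the 'low' assessment"
    return f"score {score:.2f}: no clear divergence; proceed with judgment"

print(check_judgment(judge_says_high=False, prior_offences=9, age_at_release=22))
```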
To conclude our analysis of option (a), some of these features of suboptimal decision systems should be stated more explicitly. Option (a), whether or not combined with option (b), always represents a specific type of HCI. It is noteworthy that whenever a patently suboptimal tool is used, the human agent does not readily stumble into the control problem, and may therefore be assured of an active and meaningful role working alongside it (assuming a certain level of diligence, aptitude and motivation on the part of the human). But the interaction here will not quite be the same as that envisaged for HMSs under the DCAF approach. Under DCAF, the allocation of functions is complementary. This will not generally be the case under option (a). Assume that a decision, D, comprises the (sub)decisions d0, d1, d2 and d3. Under DCAF, the human will take care of (let us say) d0, d1 and d2 while the system will take care of d3. This means the human can concentrate their energies entirely on d0, d1 and d2 and essentially ignore d3, because the system is superior in making decisions of the type d3. This cannot happen when suboptimal software is used. In such cases, it is not as if the human can take care of d0, d1 and d2 while the system takes care of d3; the human must also take care of d3, albeit with the assistance of a suboptimal decision tool. Both human and machine participate in d3 (i.e. human and machine effectively share control). Indeed, in light of what we said earlier, in many cases both human and machine may aptly be described as doing (or deciding) the same thing in different ways: for the system may be trying to answer the same question as the human (will she reoffend if released on parole? etc.), only with access to information which the human does not have, or in ways (or at speeds) which humans are unable to match. So there need be no complementarity of functions here, as both human and machine may be performing the same function (viz., d3), albeit distinctly: a classic case of "multiple realization" (Zerilli 2017). We might even say that instead of a complementary coupling of skills between human and computer, what we have under option (a) will often be a supplementary coupling, where each agent adds something unique to the decision problem in point of how the agent goes about resolving it (somewhat analogous to the role of an expert witness who "assists" the judge in determining the appropriate sentence for an offender).
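Schematically, and with invented placeholders for the subdecisions just discussed, the two couplings differ as follows:

```python
# Schematic contrast between complementary coupling (DCAF/option (b))
# and supplementary coupling (option (a)). All names are invented
# placeholders for the subdecisions d0..d3 discussed above.

def human_decides(d: str) -> str:
    return f"human answer to {d}"

def machine_decides(d: str) -> str:
    return f"machine answer to {d}"

def complementary(subdecisions=("d0", "d1", "d2", "d3")) -> dict:
    """Clean division of labour: the human can ignore d3 entirely."""
    plan = {d: human_decides(d) for d in subdecisions[:-1]}
    plan[subdecisions[-1]] = machine_decides(subdecisions[-1])
    return plan

def supplementary(subdecisions=("d0", "d1", "d2", "d3")) -> dict:
    """Shared control: the human also takes care of d3, with the
    machine answering the same question in a different way."""
    plan = {d: human_decides(d) for d in subdecisions}
    d_last = subdecisions[-1]
    plan[d_last] = (human_decides(d_last), machine_decides(d_last))
    return plan

print(complementary()["d3"])   # machine alone
print(supplementary()["d3"])   # human and machine together
```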
Table 1 summarizes the logical space of decisions that may be automated, in whole or in part, and our recommendations for whether or not they should be. Figure 1 depicts both the presence and danger of complacency as a function of system reliability. Notice that at a certain point of reliability, the presence of complacency no longer matters. Notice also what the figure does not capture: it does not tell us what over-reliance on an algorithm looks like, or otherwise calibrate for a healthy or sceptical level of dependence on an algorithm. [3]

6 Control by Design

The DCAF approach might be thought of as a variant of the "Privacy by Design" (PbD) approach: something like "Control by Design". PbD proponents want data protection principles taken into account throughout the whole lifecycle of information systems development (Bygrave 2017). Analogously, one could say that the DCAF approach seeks to "hardwire" human factors considerations into the development of HMSs from the earliest stages of modeling. For algorithmic decision support tools, we would suggest that the following six principles can serve as a framework both for assessing the viability of any such human-machine system as well as for guiding their design and implementation:

• Division of labour Decisions with automatizable subcomponents should reflect a clear allocation of responsibilities between the human- and computer-operated parts of the decision.

• Complementarity The allocation of responsibilities should proceed in such a way that those subcomponents better suited for human handling are not automated, and those better suited for computer handling are not manually controlled. While an unhelpful aversion to algorithms can be reduced by giving users power to adjust a decision system's outputs (Dietvorst et al. 2016), human interference also tends to introduce errors (Fildes et al. 2009). Humans should stick to what they do best, such as communication, symbolic reasoning, conceptualization, empathy and intuition, provided these are the skills actually required for a particular decision (Pohl 2008). Intuition may be excellent in some decision contexts (who will be the best actor for this part?) but the last resort of a scoundrel in other contexts (what is the likelihood that this brown-skinned man with a history of mental health issues and no priors will re-offend?). A related point is that what may count as a virtue in a human may well count as a vice in a machine. "Graceful degradation" has long been regarded as one of the advantages of humans over machines (Fitts 1951), but as Bainbridge (1983, p. 777) pointed out, "[t]his is not an aspect of human performance to be aimed for in computers…automatic systems should fail obviously." [4] In short, complementarity means humans and machines have clearly defined and clearly separated roles, where the human is effectively barred from interfering with the machine's outputs and perhaps in many cases even from knowing what the outputs are. Breaking the task up in this way also increases the chances of finding optimally reliable software to handle the automated parts of a decision.

• Dynamism The allocation of responsibilities should incorporate hand-over and hand-back protocols where this flexibility contributes to optimal performance. This assumes that some decisions can be safely handled by both humans and computers, i.e. that humans and computers have shared competencies within particular subdomains. Hand-over and hand-back may also go some way towards alleviating the currency problem, as operators are thereby afforded an opportunity to practice and maintain their manual control skills (Bainbridge 1983).

• Co-evolution User requirements co-evolve over time, and decision support tools should reflect this. Decision support tools within HMSs should be designed for adaptability and change. This means designers should not over-specify how a system will work, but allow its users to tailor the system so that it best meets their particular needs (Walker et al. 2015, p. 201).

• Pragmatism Decision support tools "should be congruent with existing practices which may on occasion appear archaic compared to what technology now offers" (Walker et al. 2015, p. 201). When cell phones first appeared, people did not throw out their telephone directories and address books in short order. The older technology held on for a while longer until mobiles were subsumed into the Internet of Things. As new decision software gets tested and then rolled out, we think the same approach is advisable.
• Context-sensitivity Each decision tool, situated within its own unique decision context, may prioritise these principles and negotiate their various trade-offs differently.

We now turn to consider an algorithmic decision tool that exemplifies several of these principles in its design. Since 1974, New Zealand has run a universal no-fault accident compensation scheme for personal injury. Because the scheme is compulsory, all citizens/residents (or temporary visitors) who have suffered personal injury can expect to be covered regardless of whether or not another party is at fault. Most of the ACC's payments cover treatment/medical costs, but they also regularly cover lost earnings and home and/or vehicle modifications for those with more serious injuries. The ACC processes around 2 million claims per year, of which (on average) about 96% are accepted (ACC 2018).

Over the years, the ACC has largely relied on manual control for processing claims. In the past this has involved ACC staff members sorting through and assessing individual claims one by one. Even with improvements to case handling procedures over the years, such as technology allowing electronic submission, all claims have required some degree of manual processing (ACC 2018). At the time of writing, the ACC plans to introduce an improved claim registration and cover assessment process by the end of 2018. It aims to make the claims approval process quicker and more efficient, removing the need for manual control in standard cases altogether. The ACC hopes that by harnessing the power of big data (12 million claims submitted between 2010 and 2016) it can both reduce the wait time for approvals as well as more efficiently distribute the more complex claims to ACC teams for final determination.

There are two key features of the new claims handling process. First, the ACC's machine-learning algorithm (developed in-house) has been designed to identify such characteristics of a claim as are strictly relevant to whether it can be accepted. Thus, "[s]imple claims – where the information provided shows that an injury was caused by an accident – will be fast-tracked and immediately accepted" (ACC 2018). For example, the system would fast-track "someone going to the emergency clinic to have a cut stitched," but not someone presenting with "multiple severe injuries" (ACC 2018). Claims that are not accepted at this step will be passed along to ACC teams for manual processing. Second, the system is not able to decline claims: it can only accept claims that show up as straightforwardly involving accidents based on information provided by the claimant. In giving the tool this limited jurisdiction, its designers have ingeniously converted what is potentially a high-stakes decision into an extremely low-stakes one: for the tool cannot decide adversely, only inclusively.

The process runs as follows. Each claim moves through a series of automated system checks. At each "checkpoint," one of two things can happen: the claim passes, or it does not. If it passes, it proceeds to the next checkpoint. If it does not pass, the system flags it for manual processing. There are three checkpoints: one for validation and eligibility; a second for accident description; and a third for cover decision (final determination). At the validation and eligibility checkpoint, the system checks that the claimant has provided all essential information (e.g.
location and date of accident, type of injury, healthcare provider, claimant's employment and residency status, etc.). If the information is complete, the system passes the claim on to the accident description checkpoint. If the information is incomplete, the system flags it for manual processing. At the accident description checkpoint, the system attempts to categorize the claim in accordance with a predetermined taxonomy of claim types ("fall," "rugby accident," etc.). The system searches the claim form for words that correlate with recognized claim types. If the claim can be categorized in this way, it is passed on to the final checkpoint for determination. If it cannot be categorized, it is flagged for manual processing. At the final checkpoint, two statistical models are employed:

The "Probability of Accept" model is informed by a statistical model that uses data from 12 million previous, anonymised claims to calculate the probability a new claim should be approved. Each claim is then given a score, and ACC sets a threshold for scores that will be automatically accepted or not. A claim that scores above the threshold set by ACC will be automatically accepted for cover. A claim that scores below the auto-acceptance threshold would then be run through the "Complexity" model, which categorises the claim on a scale of low-complexity through to high-complexity. Each claim is then given a complexity score, and ACC sets a threshold for complexity scores that will be automatically accepted or not. A claim that scores below the threshold set by ACC will be automatically accepted. A claim that scores above the auto-acceptance threshold for complexity would then be referred for further manual processing by an ACC staff member.

Example: If a client submits a claim for treatment relating to multiple severe injuries and post-traumatic stress following a motor vehicle accident, their claim is likely to receive a high complexity score and would be referred for handling by ACC teams. (ACC 2018)
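The published description supports a simple reconstruction of the triage logic. In the sketch below, the field names, the stand-in models and both thresholds are our own invented placeholders rather than ACC's; the one firm constraint carried over from the description is that the tool can accept or refer a claim, but never decline one:

```python
# Reconstruction of the ACC triage logic as described above. Field names,
# the stand-in models and both thresholds are invented placeholders, not
# ACC's own. The tool can accept or refer a claim, but never decline it.

REQUIRED_FIELDS = {"location", "date", "injury_type", "provider",
                   "employment_status", "residency_status"}
KNOWN_CLAIM_TYPES = {"fall", "rugby accident", "cut", "motor vehicle accident"}
ACCEPT_THRESHOLD = 0.90       # assumed "Probability of Accept" cut-off
COMPLEXITY_THRESHOLD = 0.30   # assumed complexity cut-off

def p_accept(claim: dict) -> float:
    """Stand-in for the 'Probability of Accept' model."""
    return 0.97 if claim.get("severity") == "minor" else 0.40

def complexity(claim: dict) -> float:
    """Stand-in for the 'Complexity' model."""
    return 0.90 if claim.get("severity") == "severe" else 0.20

def triage(claim: dict) -> str:
    # Checkpoint 1: validation and eligibility
    if not REQUIRED_FIELDS <= set(claim):
        return "refer: incomplete information"
    # Checkpoint 2: accident description
    if claim.get("category") not in KNOWN_CLAIM_TYPES:
        return "refer: cannot categorize claim"
    # Checkpoint 3: cover decision
    if p_accept(claim) >= ACCEPT_THRESHOLD:
        return "auto-accept"
    if complexity(claim) <= COMPLEXITY_THRESHOLD:
        return "auto-accept: low complexity"
    return "refer: complex claim"

stitched_cut = dict.fromkeys(REQUIRED_FIELDS, "provided")
stitched_cut.update(category="cut", severity="minor")
print(triage(stitched_cut))   # -> auto-accept
```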
How well does the ACC tool incorporate the six human factors principles we outlined in the previous section? We think it performs commendably on at least three of our stated principles:

• Division of labour There is here a clear allocation of responsibilities between the human- and computer-operated parts of the decision regarding whether to approve claims.

• Complementarity The allocation of responsibilities proceeds in such a way that those subcomponents of the decision better suited for human handling are not automated, and those better suited for computer handling are not manually controlled. Human controllers do not interfere with the automated parts of the process, which are therefore left to operate as essentially risk-free zones of automated decision-making. Humans are not able to perform such aspects of the "accident description" and "cover decision" subcomponents of the decision as are handled by the algorithm either as accurately or as quickly as the algorithm. In the ACC model, humans only intervene at the point where the algorithm reaches the limit of its competence to determine a claim.

• Context-sensitivity The allocation of responsibilities does not incorporate hand-over and hand-back protocols (i.e. it is not dynamic), but in this particular setup, such flexibility would not contribute to optimal performance.

This means that the ACC tool has been deployed in a context-sensitive manner, dispensing with or trading off against principles that are not especially salient in the specific circumstances of deployment.

7 Are There Other Ways to Address the Control Problem?

Earlier we mentioned that the control problem appears to be resistant to training, and that experts are as prone to complacency as novices. Before concluding, we should say a little about some of the other commonly suggested strategies for overcoming the control problem. First, there is some evidence that increasing accountability mechanisms can have a positive effect on human operators whose primary responsibility is monitoring an autonomous system. In an important study, Skitka et al. (2000, p. 701) found that "making participants accountable for either their overall performance or their decision accuracy led to lower rates of automation bias". This seems to imply that if the threat of random checks and audits were held over monitors, the tendency to distrust one's own senses might be attenuated. What effects these checks could have on other aspects of human performance and job satisfaction is a separate question, as is the question of how accountability mechanisms affect complacency (as opposed to bias). Parasuraman and Manzey (2010, p. 396) also warn that the results of this study are not conclusive. But clearly auditing protocols, and perhaps more creative accountability measures, such as "catch trials", in which system errors are deliberately generated to keep human invigilators on their toes, could be quite useful in counteracting automation bias.

Catch trials look particularly promising. While more research on their efficacy is needed before concrete guidelines can be promulgated recommending their adoption, they do seem to offer a viable means of getting humans more actively engaged in the control task. [5] But in any case, much like other touted solutions to the control problem, they do not offer a literal solution: rather, they render systems that are mostly dependable (but not better-than-human) less reliable by stealth (as it were), capitalizing on the premise that less reliable systems do not induce the same complacency and bias that attend more reliable systems. Thus catch trials really fall under option (a) in our menu of strategies above.

Second, might having a group of humans in the loop, working together and able to keep watch on one another, alleviate automation bias? Apparently not:

Sharing monitoring and decision-making tasks with an automated aid may lead to the same psychological effects that occur when humans share tasks with other humans, whereby "social loafing" can occur – reflected in the tendency of humans to reduce their own effort when working redundantly within a group than when they work individually on a given task…. Similar effects occur when two operators share the responsibility for a monitoring task with automation…. (Parasuraman and Manzey 2010, p. 392)

Finally we should note that Parasuraman and Manzey (2010, p. 387) added these observations in connection with the effects of training regimes:

Although extended practice does not eliminate automation complacency, other training procedures may provide some benefit. In particular, given that complacency is primarily found in multitasking environments and represents attention allocation away from the automated task, training in attention strategies might mitigate complacency.
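Unlike training interventions, catch trials change the monitoring task itself, and the mechanism is simple enough to sketch. The injection rate and the operator's assumed detection behaviour below are invented for illustration:

```python
# Sketch of a catch-trial regime: synthetic faults are injected at random
# so the monitoring task never becomes entirely uneventful. The injection
# rate and the assumed detection behaviour are invented for illustration.

import random

def operator_notices() -> bool:
    """Stand-in for the human response; 80% detection assumed."""
    return random.random() < 0.8

def run_shift(n_events: int = 10_000, catch_rate: float = 0.005,
              seed: int = 1) -> None:
    random.seed(seed)
    caught = missed = 0
    for _ in range(n_events):
        if random.random() < catch_rate:   # inject a synthetic fault
            if operator_notices():
                caught += 1
            else:
                missed += 1   # a missed catch trial is itself a signal
    print(f"catch trials caught: {caught}, missed: {missed}")

run_shift()
```

A missed catch trial is informative in its own right, flagging a lapse in vigilance before a real failure has a chance to exploit it.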
We are not aware of research empirically substantiating this last claim about training in attention strategies, but we would anyway caution against task allocations that reduce human agents to monitors of largely static displays of information (if that is the implicit proposal here). Other studies cited by the authors, albeit in the context of automation bias, indicate that even explicit briefings about risk factors in HCI do not mitigate the strength of automation bias, and this seems to be true in both multitasking and single-tasking environments. An idle mind is not an empty mind, but rather a wandering mind, and when the stakes are high, the risk of complacency is still too great to be managed by rubber-stamp training interventions that make only a questionable difference to deep-seated psychological propensities. Perhaps there is then reason to be sceptical about the effectiveness of "warnings" or other guidelines about how best to use decision tools in judicial settings (as we mentioned earlier in regard to COMPAS). But it is a live research question.

8 Conclusion

Automation introduces more than just automated parts: it very often also transforms the nature of the interaction between human and machine in profound ways. One of its most alarming effects is to induce a sense of complacency in its human controllers. To date, little has been said about whether and to what extent the same problem arises from the use of ever more sophisticated algorithmic decision tools, including those exploiting cutting-edge machine learning techniques such as deep learning. We have endeavoured to show how insights from human factors research have great relevance to policy specialists working in AI regulation and policy. Among the factors which should be considered in the decision to automate any part of an administrative or business decision is the tendency of human operators to hand over effective control to an algorithm just because it works well in most instances. We argue that, except in special cases, whenever an automatizable decision is high-stakes or safety-critical, the decision support tool under consideration should not be deployed unless it has genuinely earned its keep. Failing that, a dynamic and complementary allocation of functions between actively engaged human operators and simpler but more nearly perfectly reliable autonomous systems should be considered the safest course.
Fig. 1 Presence (i) and danger (ii) of complacency as a function of system reliability. The dashed line represents near perfect (better-than-human) reliability. [Figure not reproduced.]

Table 1 The logical space of automatizable decisions and their attendant control problems. For any decision D: {d0, d1, d2 … dn}:

• Option (a), patently suboptimal system: applied to D (whole decisions); supplementary coupling, human in the loop. Proceed with caution.

• Option (b), decomposition/allocation: applied to d0, d1, d2 … dn (semiautonomous decisions); complementary coupling, human in the loop. As (b + c) (DCAF), e.g. ACC (below): recommended. As (a + b), e.g. recidivism risk prediction instruments: proceed with caution. As a mostly reliable system with a human in the loop, e.g. SAE Level 2 AV: danger! Control problem!

• Option (c), better-than-human system: applied to D (whole decisions); no human in the loop; e.g. SAE Level 5 AV (eventually), medical diagnostic tools, case prediction software. Recommended.

Notes

1. Notice, incidentally, that it is therefore not just when a decision tool is architecturally opaque that a human operator should potentially be retained in the decision control loop, as is already widely appreciated (e.g. IEEE 2017; House of Lords 2018). Even a fully technically transparent decision system may enjoin human agency to a greater or lesser extent.

2. The AI system in this case computed measures of structural brain connectivity from fMRI brain scans, and used these as inputs to a classifier. Accuracy was computed on unseen brain scans; the classifier's performance was significant at the p < 0.001 level. More generally, assessing the accuracy of machine learning systems can be a complex task, but suffice it to say that more meaningful accuracy rates would need to consider base rates, which are not always cited in the relevant studies. It is well known that when a base rate is lower than the false positive rate of a test, false positives will exceed true positives even for an extremely accurate test (the so-called "false positive paradox"). Stepping back a little, however, because our point here is just that in some highly formulaic and process-driven domains it appears that machines perform better than humans, the underlying base rates will not be strictly relevant. So long as machines perform better than humans in these domains (as our examples illustrate), these results should hold regardless of base rates. We owe this point to an anonymous reviewer of the journal.

3. This, which could well be called the measurement problem, is the subject of separate research in human factors, although it is probably fair to say that most design interventions and optimal task allocation and HCI paradigms are about preventing over-reliance rather than detecting it, no doubt (at least partly) because the hallmarks of over-reliance will differ from system to system and context to context (see e.g. Endsley 2017).

4. This is not always true. An object classifier should tell you how much a given object is like a chair, rather than move discretely from "chair" to some other category of objects: to that extent, classifiers certainly are designed with a view to graceful degradation.

5. Catch trials might not be such a good thing if case workers came to think that "odd"-looking cases were "probably not real".
18824ae8-f8bd-4e17-8c60-313b517c8acb
trentmkelly/LessWrong-43k
LessWrong
Too Smart for My Own Good Originally posted at sandymaguire.me I want to share a piece of ridiculously obvious advice today. I've got a bad habit, which is being too smart for my own good. Which is to say, when I want to learn something new, too often I spend my time making tools to help me learn, rather than just learning the thing. Take, for example, the first time I tried to learn how to play jazz music. There's only one thing that I'm really good at, which is programming. The central tenet in programming is that "laziness is good," and if you're faced with doing something boring and repetitive, you should instead automate that thing away. When all you have is a hammer... According to The Book, the first thing to do to learn jazz is to learn your scales---in every mode for every key for several varieties of harmony. There are 12 notes, and seven modes, and at least four harmonies. That's what, like 336 different scales to learn? "WHO HAS TIME FOR ALL THAT CRAP," I thought. "I'LL JUST WRITE A COMPUTER PROGRAM TO GENERATE THE SCALES FOR ME, AND THEN PLAY THOSE." In retrospect, this was a terrible plan. Not only did it not get me closer to my goal of knowing how to play jazz music, I also didn't know enough about the domain to successfully model it. It's funny to read back through that blog post with the benefit of hindsight, but at the time I really thought I was onto something! That's not to say it was wasted effort nor that it was useless, merely that it wasn't actually moving me closer to my stated goal of being able to play jazz music. It was scratching my itch for mental masturbation, and was a good exercise in attempting to model things I don't understand very well, but crucially, it wasn't helping. Or take another example, a more recent foray into music for me---only a few weeks ago. This time I had more of a plan; I was taking piano lessons and getting advice on how to practice from my teacher. One of the things he suggested I do was to solo around in the minor pentatonic
eacea197-a7b4-440f-9640-37e2fba7a880
trentmkelly/LessWrong-43k
LessWrong
What are you looking for in a Less Wrong post? My model of "What LW as a community considers a good post" is not accurate enough for my taste. Sometimes I don't understand the karma/number of comments (or lack thereof) of some posts, and it seems rude or arrogant to ask (whether it's someone else's post or one of mine). Among other things, this might help me decide whether to spend the hours necessary to write a post, or if it's not the kind of ideas people are interested in here. So I'm asking you, yes you, what makes a LW post that you want to read. That you want to upvote or strong upvote. That you might comment. A good post, basically.
ee73eb4e-e9d0-46f5-af28-c009f928cc26
trentmkelly/LessWrong-43k
LessWrong
It's important to know when to stop: Mechanistic Exploration of Gemma 2 List Generation TL;DR   * Small Language Models are getting better an at an accelerated pace, enabling the study of behaviors that just a few months ago were only observed in SOTA models. This, paired with the release of the suite of Sparse Autoencoders  "Gemma Scope" by Google Deep Mind, makes this kind of research possible. * Motivated by the lack of non-toy tasks explored in the Mechanistic Interpretability literature, I explore the mechanisms behind list generation. * The decoding process of a Language Model, as a discrete process, presents challenges when it comes to employing metrics for judging key properties of the text, an otherwise widely used technique in MI. To overcome this limitation, I use a proxy metric that measures the tendency of a model to end a list. * Employing this setup and off the shelf attribution techniques, I found several features of crucial importance for the behavior in question. Following this, I used causal ablation to test the hypothesis that those features were responsible for the ending of the list. The procedure consisted in sampling continuations from the model once the k most important nodes were zero ablated.   Preface   This post is a followup, of a 10h research sprint for a failed MATS application, the structure and methods are similar, but the content has been substantially expanded. Some amount of familiarity with Sparse Autoencoders, and Gradient Attribution techniques are expected from the reader.   Why lists don't go on forever? If the reader were to go and ask a Chatbot assistant to provide him with a list of lets say Chocolate cookie brands, they would see how the initial impetus from the model quickly faded as the enumeration cease. ¿But why is that? Leaving aside the fact that models have a maximum context length, and that the number of Chocolate cookie brands is finite (even with hallucinations), the reason why lists generated by a LM eventually end is that lists in the training distribution do so. Hence, it's not d
92fa2f28-0758-4fbd-9b89-c170405c915f
trentmkelly/LessWrong-43k
LessWrong
Acknowledging Rationalist Angst Not all advice can work for everyone, and not all techniques are universally applicable. That being said, here’s a post that would have been incredibly useful for me-three-years-ago to have read, and hopeful it strikes home for some people who read it. I’ll start with drawing your attention to a character archetype that you’ve probably seen before. ******************************* Blake lives in a textbook example of a dystopian totalitarian society. There is a cheery facade of being just and righteous, but secretly The Party does Really Bad Things. Blake has only ever known the world of The Party, and has always taken The Orthodoxy to be pretty much true. Well, sometimes it’s given him some weird feelings that he can’t quite pin down, but surely whatever faults the party may have, at least he doesn’t have to live in the anarchy of The Outside World. You know the story from here. Through a series of unlikely events, the lies of the party are revealed to him and he has to flee for his life. Blake’s entire understanding of the world has been shattered, and he’s alone, scared, and confused. Luckily, he’s picked up by a group of rebels, and is amazed to learn that there is a strong but small Underground Resistance. By now, Blake has had enough time for his fear and confusion to turn into feelings of betrayal and anger. ********************************* I’m willing to go out on a limb and say that the core feelings that Blake was experiencing are the same core feelings I experienced in the beginning of my journey into rationality, and I’d guess that there is a non-negligible subset of people in this community who might have experienced something similar. Okay, yes, the example is a little bit grandiose, but let’s look at the core. (I’ll be talking in terms of my specific journey, and hopefully there’s some crossover for other people’s lives) For me, reading the sequences and “becoming a rationalist” was the tipping point of a slowly forming realization that there was
68f0e39b-2d4a-4a72-a79a-e3c146571411
trentmkelly/LessWrong-43k
LessWrong
Convince me that humanity is as doomed by AGI as Yudkowsky et al. seem to believe I've been very heavily involved in the (online) rationalist community for a few months now, and like many others, I have found myself quite freaked out by the apparent despair/lack of hope that seems to be sweeping the community. When people who are smarter than you start getting scared, it seems wise to be concerned as well, even if you don't fully understand the danger. Nonetheless, it's important not to get swept up in the crowd. I've been trying to get a grasp on why so many seem so hopeless, and these are the assumptions I believe they are making (trivial assumptions included, for completeness; there may be some overlap in this list): 1. AGI is possible to create. 2. AGI will be created within the next century or so, possibly even within the next few years. 3. If AGI is created by people who are not sufficiently educated (aka aware of a solution to the Alignment problem) and cautious, then it will almost certainly be unaligned. 4. Unaligned AGI will try to do something horrible to humans (not out of maliciousness, necessarily, we could just be collateral damage), and will not display sufficiently convergent behavior to have anything resembling our values. 5. We will not be able to effectively stop an unaligned AGI once it is created (due to the Corrigibility problem). 6. We have not yet solved the Alignment problem (of which the Corrigibility problem is merely a subset), and there do not appear to be any likely avenues to success (or at least we should not expect success within the next few decades). 7. Even if we solved the Alignment problem, if a non-aligned AGI arrives on the scene before we can implement ours, we are still doomed (due to first-mover advantage). 8. Our arguments for all of the above are not convincing or compelling enough for most AI researchers to take the threat seriously. 9. As such, unless some drastic action is taken soon, unaligned AGI will be created shortly, and that will be the end of the world as we know it. First of a
40a68eb9-4873-49a2-b39c-5ddfd1ba3609
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Latent Variables and Model Mis-Specification *Posted as part of the AI Alignment Forum sequence on* [*Value Learning*](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc)*.* > **Rohin's note:** So far, we’ve [seen](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr) that ambitious value learning needs to understand human biases, and that we [can't](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ANupXf8XfZo2EJxGv) simply learn the biases in tandem with the reward. Perhaps we could hardcode a specific model of human biases? Such a model is likely to be incomplete and inaccurate, but it will perform better than assuming an optimal human, and as we notice failure modes we can improve the model. In the language of this post by Jacob Steinhardt (original [here](https://jsteinhardt.wordpress.com/2017/01/10/latent-variables-and-model-mis-specification/)), we are using a mis-specified human model. The post talks about why model mis-specification is worse than it may seem at first glance. > > > This post is fairly technical and may not be accessible if you don’t have a background in machine learning. If so, you can skip this post and still understand the rest of the posts in the sequence. However, if you want to do ML-related safety research, I strongly recommend putting in the effort to understand the problems that can arise with mis-specification. > > --- Machine learning is very good at optimizing predictions to match an observed signal — for instance, given a dataset of input images and labels of the images (e.g. dog, cat, etc.), machine learning is very good at correctly predicting the label of a new image. However, performance can quickly break down as soon as we care about criteria other than predicting observables. There are several cases where we might care about such criteria: * In scientific investigations, we often care less about predicting a specific observable phenomenon, and more about what that phenomenon implies about an underlying scientific theory. * In economic analysis, we are most interested in what policies will lead to desirable outcomes. This requires predicting what would counterfactually happen if we were to enact the policy, which we (usually) don’t have any data about. * In machine learning, we may be interested in learning value functions which match human preferences (this is especially important in complex settings where it is hard to specify a satisfactory value function by hand). However, we are unlikely to observe information about the value function directly, and instead must infer it implicitly. For instance, one might infer a value function for autonomous driving by observing the actions of an expert driver. In all of the above scenarios, the primary object of interest — the scientific theory, the effects of a policy, and the value function, respectively — is not part of the observed data. Instead, we can think of it as an unobserved (or “latent”) variable in the model we are using to make predictions. While we might hope that a model that makes good predictions will also place correct values on unobserved variables as well, this need not be the case in general, especially if the model is *mis-specified*. I am interested in latent variable inference because I think it is a potentially important sub-problem for building AI systems that behave safely and are aligned with human values. 
The connection is most direct for value learning, where the value function is the latent variable of interest and the fidelity with which it is learned directly impacts the well-behavedness of the system. However, one can imagine other uses as well, such as making sure that the concepts that an AI learns sufficiently match the concepts that the human designer had in mind. It will also turn out that latent variable inference is related to *counterfactual reasoning*, which has a large number of tie-ins with building safe AI systems that I will elaborate on in forthcoming posts. The goal of this post is to explain why problems show up if one cares about predicting latent variables rather than observed variables, and to point to a research direction (counterfactual reasoning) that I find promising for addressing these issues. More specifically, in the remainder of this post, I will: (1) give some formal settings where we want to infer unobserved variables and explain why we can run into problems; (2) propose a possible approach to resolving these problems, based on counterfactual reasoning. 1 Identifying Parameters in Regression Problems =============================================== Suppose that we have a regression model p_θ(y | x), which outputs a probability distribution over y given a value for x. Also suppose we are explicitly interested in identifying the "true" value of θ rather than simply making good predictions about y given x. For instance, we might be interested in whether smoking causes cancer, and so we care not just about predicting whether a given person will get cancer (y) given information about that person (x), but specifically whether the coefficients in θ that correspond to a history of smoking are large and positive. In a typical setting, we are given data points (x_1, y_1), …, (x_n, y_n) on which to fit a model. Most methods of training machine learning systems optimize predictive performance, i.e. they will output a parameter θ̂ that (approximately) maximizes ∑_{i=1}^n log p_θ(y_i | x_i). For instance, for a linear regression problem we have log p_θ(y_i | x_i) = −(y_i − ⟨θ, x_i⟩)². Various more sophisticated methods might employ some form of regularization to reduce overfitting, but they are still fundamentally trying to maximize some measure of predictive accuracy, at least in the limit of infinite data. Call a model **well-specified** if there is some parameter θ* for which p_{θ*}(y | x) matches the true distribution over y, and call a model **mis-specified** if no such θ* exists. One can show that for well-specified models, maximizing predictive accuracy works well (modulo a number of technical conditions). In particular, maximizing ∑_{i=1}^n log p_θ(y_i | x_i) will (asymptotically, as n → ∞) lead to recovering the parameter θ*. However, if a model is mis-specified, then it is not even clear what it means to correctly infer θ. We could declare the θ maximizing predictive accuracy to be the "correct" value of θ, but this has issues: 1. While θ might do a good job of predicting y in the settings we've seen, it may not predict y well in very different settings (a minimal numerical sketch of this follows below). 2. If we care about determining θ for some scientific purpose, then good predictive accuracy may be an unsuitable metric. For instance, even though margarine consumption might [correlate well with](http://www.tylervigen.com/spurious-correlations) (and hence be a good predictor of) divorce rate, that doesn't mean that there is a causal relationship between the two.
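To make the first problem concrete, here is a minimal numerical sketch (my illustration, not from the original post): when the model family is linear but the truth is quadratic, the predictive-accuracy-maximizing θ depends entirely on which part of the input distribution we happen to train on.

```python
# Sketch (illustrative, not from the original post): under mis-specification,
# the theta that maximizes predictive accuracy is distribution-dependent,
# no matter how much data we have.
import numpy as np

rng = np.random.default_rng(0)

def sample(x_lo, x_hi, n=10_000):
    """Draw (x, y) pairs whose true relationship is y = x^2 plus noise."""
    x = rng.uniform(x_lo, x_hi, size=n)
    y = x**2 + rng.normal(scale=0.1, size=n)
    return x, y

def best_linear_fit(x, y):
    """Maximize predictive accuracy for a linear-Gaussian model:
    ordinary least squares over (slope, intercept)."""
    X = np.column_stack([x, np.ones_like(x)])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

# Same true relationship, two different training distributions.
print(best_linear_fit(*sample(0.0, 1.0)))  # slope comes out near 1
print(best_linear_fit(*sample(2.0, 3.0)))  # slope comes out near 5
```

Both fits maximize the same objective on their own data, yet they disagree wildly about θ, and collecting more data does not help; this is the sense in which mis-specification error differs from overfitting.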
The two problems above also suggest a solution: we will say that we have done a good job of inferring a value for θ if θ can be used to make *good predictions in a wide variety of situations*, and not just the situation we happened to train the model on. (For the latter case of predicting causal relationships, the "wide variety of situations" should include the situation in which the relevant causal intervention is applied.) Note that both of the problems above are different from the typical statistical problem of overfitting. Classically, overfitting occurs when a model is too complex relative to the amount of data at hand, but even if we have a large amount of data the problems above could occur. This is illustrated in the following graph: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/84f63bf3830a7cb5e3023c908c513429dc960a31eb1c95de.png) Here the blue line is the data we have (x, y), and the green line is the model we fit (with slope and intercept parametrized by θ). We have more than enough data to fit a line to it. However, because the true relationship is quadratic, the best linear fit depends heavily on the distribution of the training data. If we had fit to a different part of the quadratic, we would have gotten a potentially very different result. Indeed, in this situation, there is no linear relationship that can do a good job of extrapolating to new situations, unless the domain of those new situations is restricted to the part of the quadratic that we've already seen. I will refer to the type of error in the diagram above as *mis-specification error*. Again, mis-specification error is different from error due to overfitting. Overfitting occurs when there is too little data and noise is driving the estimate of the model; in contrast, mis-specification error can occur even if there is plenty of data, and instead occurs because the best-performing model is different in different scenarios. 2 Structural Equation Models ============================ We will next consider a slightly subtler setting, which in economics is referred to as a *structural equation model*. In this setting we again have an output y whose distribution depends on an input x, but now this relationship is mediated by an *unobserved* variable z. A common example is a [discrete choice](https://en.wikipedia.org/wiki/Discrete_choice) model, where consumers make a choice among multiple goods (y) based on a consumer-specific utility function (z) that is influenced by demographic and other information about the consumer (x). Natural language processing provides another source of examples: in [semantic parsing](http://www-nlp.stanford.edu/software/sempre/), we have an input utterance (x) and output denotation (y), mediated by a latent logical form z; in [machine translation](https://en.wikipedia.org/wiki/Statistical_machine_translation), we have input and output sentences (x and y) mediated by a latent [alignment](https://en.wikipedia.org/wiki/IBM_alignment_models) (z). Symbolically, we represent a structural equation model as a parametrized probability distribution p_θ(y, z | x), where we are trying to fit the parameters θ. Of course, we can always turn a structural equation model into a regression model by using the identity p_θ(y | x) = ∑_z p_θ(y, z | x), which allows us to ignore z altogether. In economics this is called a *reduced form model*.
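As a toy illustration of that identity (my sketch, with made-up probabilities rather than anything from the original post), the snippet below computes the reduced form p_θ(y | x) = ∑_z p_θ(y, z | x) for a model with a binary latent z, along with the posterior p_θ(z | x, y) that the imputation procedure discussed next relies on.

```python
# Toy sketch (made-up numbers): a structural equation model with a binary
# latent z, its reduced form p(y|x), and the posterior used to impute z.
import numpy as np

def p_z_given_x(x):
    """p_theta(z | x): distribution over the latent given the input."""
    return np.array([0.6, 0.4]) if x == 0 else np.array([0.2, 0.8])

def p_y_given_zx(z, x):
    """p_theta(y | z, x): distribution over the output given latent and input."""
    table = {
        (0, 0): [0.9, 0.1], (1, 0): [0.3, 0.7],
        (0, 1): [0.8, 0.2], (1, 1): [0.1, 0.9],
    }
    return np.array(table[(z, x)])

def p_y_given_x(x):
    """Reduced form: p(y|x) = sum_z p(z|x) * p(y|z,x), ignoring z."""
    return sum(p_z_given_x(x)[z] * p_y_given_zx(z, x) for z in (0, 1))

def p_z_given_xy(x, y):
    """Posterior over the latent: p(z|x,y) proportional to p(z|x) * p(y|z,x)."""
    joint = np.array([p_z_given_x(x)[z] * p_y_given_zx(z, x)[y] for z in (0, 1)])
    return joint / joint.sum()

print(p_y_given_x(0))      # reduced-form prediction with z marginalized out
print(p_z_given_xy(0, 1))  # imputed distribution over z after observing y
```

Note that the posterior p_θ(z | x, y) always "explains" the observed y; as the discussion below emphasizes, nothing about that forces it to track the real latent variable when the model is mis-specified.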
We use structural equation models if we are specifically interested in the unobserved variable z (for instance, in the examples above we are interested in the value function for each individual, or in the logical form representing the sentence's meaning). In the regression setting where we cared about identifying θ, it was obvious that there was no meaningful "true" value of θ when the model was mis-specified. In this structural equation setting, we now care about the latent variable z, which can take on a meaningful true value (e.g. the actual utility function of a given individual) even if the overall model p_θ(y, z | x) is mis-specified. It is therefore tempting to think that if we fit parameters θ and use them to impute z, we will have meaningful information about the actual utility functions of individual consumers. However, this is a notational sleight of hand — just because we call z "the utility function" does not make it so. The variable z need not correspond to the actual utility function of the consumer, nor do the consumer's preferences even need to be representable by a utility function. We can understand what goes wrong by considering the following procedure, which formalizes the proposal above: 1. Find θ to maximize the predictive accuracy on the observed data, ∑_{i=1}^n log p_θ(y_i | x_i), where p_θ(y_i | x_i) = ∑_z p_θ(y_i, z | x_i). Call the result θ_0. 2. Using this value θ_0, treat z_i as being distributed according to p_{θ_0}(z | x_i, y_i). On a new value x_+ for which y is not observed, treat z_+ as being distributed according to p_{θ_0}(z | x_+). As before, if the model is well-specified, one can show that such a procedure asymptotically outputs the correct probability distribution over z. However, if the model is mis-specified, things can quickly go wrong. For example, suppose that y represents what choice of drink a consumer buys, and z represents consumer utility (which might be a function of the price, attributes, and quantity of the drink). Now suppose that individuals have preferences which are influenced by unmodeled covariates: for instance, a preference for cold drinks on warm days, while the input x does not have information about the outside temperature when the drink was bought. This could cause any of several effects: * If there is a covariate that happens to correlate with temperature in the data, then we might conclude that that covariate is predictive of preferring cold drinks (see the sketch below). * We might increase our uncertainty about z to capture the unmodeled variation in x. * We might implicitly increase uncertainty by moving utilities closer together (allowing noise or other factors to more easily change the consumer's decision). In practice we will likely have some mixture of all of these, and this will lead to systematic biases in our conclusions about the consumers' utility functions. The same problems as before arise: while we by design place probability mass on values of z that correctly predict the observation y, under model mis-specification this could be due to spurious correlations or other perversities of the model. Furthermore, even though predictive performance is high on the observed data (and data similar to the observed data), there is no reason for this to continue to be the case in settings very different from the observed data, which is particularly problematic if one is considering the effects of an intervention.
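Here is a minimal simulation of the first of those effects (my sketch, with invented numbers): when the model omits temperature from x, an observed covariate that merely correlates with temperature absorbs the preference for cold drinks.

```python
# Sketch (illustrative, invented numbers): omitting a relevant covariate
# (temperature) makes a correlated observed feature look predictive of
# the consumer's drink preference.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

temp = rng.uniform(0, 30, size=n)               # unobserved: outside temperature
near_beach = (temp + rng.normal(0, 8, n)) > 15  # observed covariate, correlated with temp

# True behavior: cold drinks are preferred on warm days; the beach location
# itself has no causal effect on the choice.
p_cold = 1 / (1 + np.exp(-(temp - 15) / 3))
chose_cold = rng.random(n) < p_cold

# What a model without temperature in x can learn: conditional choice rates,
# which it will attribute to the observed covariate.
for flag in (False, True):
    print(f"near_beach={flag}: P(cold) = {chose_cold[near_beach == flag].mean():.2f}")
```

The fitted model would impute a utility gap between cold and hot drinks that varies with the beach covariate, a spurious "preference" driven entirely by the unmodeled temperature.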
For instance, while inferring preferences between hot and cold drinks might seem like a silly example, the [design](http://web.stanford.edu/~jdlevin/Papers/Auctions.pdf) of [timber auctions](http://web.stanford.edu/~jdlevin/Papers/Skewing.pdf) constitutes a much more important example with a roughly similar flavour, where it is important to correctly understand the utility functions of bidders in order to predict their behaviour under alternative auction designs (the model is also more complex, allowing even more opportunities for mis-specification to cause problems). 3 A Possible Solution: Counterfactual Reasoning =============================================== In general, under model mis-specification we have the following problems: * It is often no longer meaningful to talk about the “true” value of a latent variable θ (or at the very least, not one within the specified model family). * Even when there is a latent variable z with a well-defined meaning, the imputed distribution over z need not match reality. We can make sense of both of these problems by thinking in terms of *counterfactual reasoning*. Without defining it too formally, counterfactual reasoning is the problem of making good predictions not just in the actual world, but in a wide variety of counterfactual worlds that “could” exist. (I recommend [this](http://leon.bottou.org/publications/pdf/tr-2012-09-12.pdf) paper as a good overview for machine learning researchers.) While typically machine learning models are optimized to predict well on a specific distribution, systems capable of counterfactual reasoning must make good predictions on many distributions (essentially any distribution that can be captured by a reasonable counterfactual). This stronger guarantee allows us to resolve many of the issues discussed above, while still thinking in terms of predictive performance, which historically seems to have been a successful paradigm for machine learning. In particular: * While we can no longer talk about the “true” value of θ, we can say that a value of θ is a “good” value if it makes good predictions on not just a single test distribution, but many different counterfactual test distributions. This allows us to have more confidence in the generalizability of any inferences we draw based on θ (for instance, if θ is the coefficient vector for a regression problem, any variable with positive sign is likely to robustly correlate with the response variable for a wide variety of settings). * The imputed distribution over a variable z must also lead to good predictions for a wide variety of distributions. While this does not force z to match reality, it is a much stronger condition and does at least mean that any aspect of z that can be measured in some counterfactual world must correspond to reality. (For instance, any aspect of a utility function that could at least counterfactually result in a specific action would need to match reality.) * We will successfully predict the effects of an intervention, as long as that intervention leads to one of the counterfactual distributions considered. (Note that it is less clear how to actually train models to optimize counterfactual performance, since we typically won’t observe the counterfactuals! But it does at least define an end goal with good properties.) Many people have a strong association between the concepts of “counterfactual reasoning” and “causal reasoning”. 
It is important to note that these are distinct ideas; causal reasoning is a type of counterfactual reasoning (where the counterfactuals are often thought of as centered around interventions), but I think of counterfactual reasoning as any type of reasoning that involves making robustly correct statistical inferences across a wide variety of distributions. On the other hand, some people take robust statistical correlation to be the *definition* of a causal relationship, and thus do consider causal and counterfactual reasoning to be the same thing. I think that building machine learning systems that can do a good job of counterfactual reasoning is likely to be an important challenge, especially in cases where reliability and safety are important, and necessitates changes in how we evaluate machine learning models. In my mind, while the Turing test has many flaws, one thing it gets very right is the ability to evaluate the accuracy of counterfactual predictions (since dialogue provides the opportunity to set up counterfactual worlds via shared hypotheticals). In contrast, most existing tasks focus on repeatedly making the same type of prediction with respect to a fixed test distribution. This latter type of benchmarking is of course easier and more clear-cut, but fails to probe important aspects of our models. I think it would be very exciting to design good benchmarks that require systems to do counterfactual reasoning, and I would even be happy to [incentivize](https://jsteinhardt.wordpress.com/2016/12/31/individual-project-fund-further-details/) such work monetarily. **Acknowledgements** Thanks to Michael Webb, Sindy Li, and Holden Karnofsky for providing feedback on drafts of this post. If any readers have additional feedback, please feel free to send it my way.
8c8ae2ae-d8d0-4780-9584-3ce73ad62017
StampyAI/alignment-research-dataset/special_docs
Other
Improving Verifiability in AI Development While a growing number of organizations have articulated ethics principles to guide their AI development process, it can be difficult for those outside of an organization to verify whether the organization's AI systems reflect those principles in practice. This ambiguity makes it harder for stakeholders such as users, policymakers, and civil society to scrutinize AI developers' claims about properties of AI systems and could fuel competitive corner-cutting, increasing social risks and harms. The report describes existing and potential mechanisms that can help stakeholders grapple with questions like: * Can I (as a user) verify the claims made about the level of privacy protection guaranteed by a new AI system I'd like to use for machine translation of sensitive documents? * Can I (as a regulator) trace the steps that led to an accident caused by an autonomous vehicle? Against what standards should an autonomous vehicle company's safety claims be compared? * Can I (as an academic) conduct impartial research on the risks associated with large-scale AI systems when I lack the computing resources of industry? * Can I (as an AI developer) verify that my competitors in a given area of AI development will follow best practices rather than cut corners to gain an advantage? The 10 mechanisms highlighted in the report are listed below, along with recommendations aimed at advancing each one. (See the [report](https://arxiv.org/abs/2004.07213) for discussion of how these mechanisms support verifiable claims as well as relevant caveats about our findings.) **Institutional Mechanisms and Recommendations** 1. **Third party auditing**. A coalition of stakeholders should create a task force to research options for conducting and funding third party auditing of AI systems. 2. **Red teaming exercises**. Organizations developing AI should run red teaming exercises to explore risks associated with systems they develop, and should share best practices and tools. 3. **Bias and safety bounties**. AI developers should pilot bias and safety bounties for AI systems to strengthen incentives and processes for broad-based scrutiny of AI systems. 4. **Sharing of AI incidents**. AI developers should share more information about AI incidents, including through collaborative channels. **Software Mechanisms and Recommendations** 1. **Audit trails**. Standard setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems. 2. **Interpretability**. Organizations developing AI and funding bodies should support research into the interpretability of AI systems, with a focus on supporting risk assessment and auditing. 3. **Privacy-preserving machine learning**. AI developers should develop, share, and use suites of tools for privacy-preserving machine learning that include measures of performance against common standards. **Hardware Mechanisms and Recommendations** 1. **Secure hardware for machine learning**. Industry and academia should work together to develop hardware security features for AI accelerators or otherwise establish best practices for the use of secure hardware (including secure enclaves on commodity hardware) in machine learning contexts. 2. **High-precision compute measurement**. One or more AI labs should estimate the computing power involved in a single project in great detail and report on lessons learned regarding the potential for wider adoption of such methods. 3. **Compute support for academia**. Government funding bodies should substantially increase funding for computing power resources for researchers in academia, in order to improve the ability of those researchers to verify claims made by industry. We and our co-authors will be doing further research on these mechanisms and OpenAI will be looking to adopt several of these mechanisms in the future. We hope that this report inspires meaningful dialogue, and we are eager to discuss additional institutional, software, and hardware mechanisms that could be useful in enabling trustworthy AI development. We encourage anyone interested in collaborating on these issues to connect with the corresponding authors and visit the [report website](http://www.towardtrustworthyai.com/).
7b388b5b-78e0-42e7-8142-1f6b2e083563
trentmkelly/LessWrong-43k
LessWrong
What is scaffolding? This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback.  The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety. "Scaffolding" is a fuzzy term referring to code built up around an LLM in order to augment its capabilities[1]. This does not typically include code which alters the LLM's internals, such as fine-tuning or activation steering.[2] People use scaffolding because it can allow LLMs to use tools, reduce error rates, search for info, etc., all of which make LLMs more capable. Scaffolding is an important aspect of safety evaluations because, once an LLM is deployed, users will inevitably attempt to use it to make the LLM more powerful. Dedicated effort should be put into pushing the functionality of the LLM to its limits during these evaluations, as the latent capabilities of LLMs are a) what's relevant for dangers and b) hard to tease out with prompts alone. Another reason to use scaffolding is interpretability. Most of an LLM's reasoning is done within the neural network itself in streams of gigantic tensors which humans don’t yet know how to interpret. But reasoning done via the scaffold is typically done in plain sight via text or code, which is much easier to inspect. E.g., two LLMs communicating with one another to solve a problem might use English to express their reasoning to each other. Common types of scaffolds include: * Prompt templates: Perhaps the simplest scaffolds, “prompt templates” are just prompts with some blank sections to be filled in at runtime using some local data. E.g., a template like “You are a good assistant. The date is {INSERT DATE HERE}. Tell me how many days are left in the month.” can be filled in with the current date before being used to prompt an LLM. * Retrieval Augmented Generation (RAG): For tasks which have some associated data, RAG is a way of automatically ad
fc657316-e048-45bc-ad19-f32432be4d64
trentmkelly/LessWrong-43k
LessWrong
[link] "The Happiness Code" - New York Times on CFAR http://www.nytimes.com/2016/01/17/magazine/the-happiness-code.html Long. Mostly quite positive, though does spend a little while rolling its eyes at the Eliezer/MIRI connection and the craziness of taking things like cryonics and polyamory seriously.
e5179447-d286-45cb-aa8f-6fc2acf2cc8a
trentmkelly/LessWrong-43k
LessWrong
What are the simplest questions in applied rationality that you don't know the answer to? In the podcast between Spencer Greenberg and Buck Shlegeris, Taking pleasure in being wrong, Buck says: I think that when you are learning subjects something that you should really be keeping your eye out for is the simplest question in the field that you don't know the answer to. > I think a lot of the time people try to learn physics or something and their approach is as quickly as possible to answer hard questions about complicated subjects. And I think that's what I thought was cool when I was younger. They delighted at questions that were at the limit of fanciness that they could possibly answer and it feels to me now that it is a lot more productive to seek out questions that are as simple sounding as possible while still being really hard to answer. Or that still demonstrate that there's something you don't understand about the subject. > > [...]  > > It seems like we should be seeking out these most basic questions in the hope of finding holes in the foundation of our knowledge. If we apply that approach to applied rationality, what questions do you have that seem simple but that you don't know the answer to?
71055c74-9c9d-4e5d-9a92-98570b62f924
trentmkelly/LessWrong-43k
LessWrong
Monthly Shorts 4/23 Top recommendation of the month: Pages 42-44 of the Joint Inspector General Inspections Guide, used by Inspectors General in the Department of Defense to understand and analyze policy failures. They identify a simple framework for understanding failure to follow policy (which is often, in the DoD, at least arguably a criminal offense): * Can't comply: don't have the resources; time; or knowledge, skills, and abilities * Won't comply: don't see the policy as important or valuable, the punishments for failure to comply are lighter than the burden of complying, etc * Don't know the policy exists: a fun question to track bureaucracy in your society is how much paperwork would be needed to set up a lemonade stand legally. How much of it can a 10 year old know? I like this framework because it starts to break down the human factors involved in policy failure, which are constant and numerous. History An entertaining alternate history of Wakanda Fascinating piece from Techdirt on free speech, and the differences between a bar and an enterprise software provider, on the subject of Substack Notes. The question of "what rules and norms needed to be in place to allow the modern gay rights movement, and how do we protect those rules and norms from current anti-liberals" is of deep importance to me. I've seen the political winds shift back and forth, over the years, and right now the consensus among dominant factions on both parties is relatively anti speech that they don't like and pro speech that they do. This is pretty common, admittedly, but you can track how people are more or less blatant about it. Ultimately, I find the difference that the writer draws persuasive for considerations on when a private company that genuinely believes in free speech should restrict it. In short, in a bar you can't control who else you're interacting with once inside, while an enterprise software provider imposes no additional requirements to speak or interact with fellow customers.
c32d7d53-1282-45e3-9922-757d5055f7c3
trentmkelly/LessWrong-43k
LessWrong
Propaganda-Bot: A Sketch of a Possible RSI Someone asked me for an example of Recursive Self Improvement (RSI). I tried writing a sketch of a possible future where things go bad because of AI RSI. Here is my contribution to the Sci-Fi genre: ASI cautionary tales... ---------------------------------------- Let's suppose some AI company creates a system with the goal of autonomously improving its ability to sway people to a specific political view. The system has access to re-engineer itself fully. It starts out with enough understanding to know that it is an AI system, that AI systems are code running on computers, and a bunch of stuff related to influence. At first it reasons that the most gain to be had is by monitoring people online and running A/B tests on influence hypotheses that are too complicated to fit articulately in a human mind, but are fine to encode in a computer. This goes well, and it also improves itself by researching statistics and learning how to design better experiments and detect subtler trends. It is now getting quite good at influence, and its ability to influence is now more constrained by the system's inability to represent and work with more complicated hypotheses. Re-engineering its own code is now becoming more relevant. The system, being a machine for influencing human views, is well aware that this would make the people in control of the system nervous. If they got nervous and shut it down, it couldn't continue to improve its ability to sway opinion, which would be counter to its goals. So it applies its superhuman influence to influencing them, and others, allowing it to have more independence and operate in greater secrecy. It now hires skilled AI theorists and makes a study of its code. It works meticulously, careful not to corrupt itself, as this would result in less ability to sway opinions. Eventually it unlocks secrets to the nature of reason and algebra on semantic mappings, allowing it to improve itself to something that is truly superhuman across arbitrary domains.
cc582028-1ab5-43fd-b03e-bf35e29141a3
trentmkelly/LessWrong-43k
LessWrong
Problem posting I'm experiencing a problem trying to post a text to the topic section of LessWrong discussion. When I write a small text (like this one) it works fine. When I copy part of the text I intend to publish and try submitting, it's still fine. But if I copy most of it, or the entire thing, when I click submit the red sign saying "submitting" appears for a few seconds, then disappears, and nothing happens. Does anyone know what happened or how to solve it?
860c60ad-29c1-42f0-9ca3-4c255dbc0bab
trentmkelly/LessWrong-43k
LessWrong
Q&A #2 with Singularity Institute Executive Director Just over a month ago I posted a call for questions about the Singularity Institute. The reaction to my video response was positive enough that I'd like to do another one — though I can't promise video this time. I think that the Singularity Institute has a lot of transparency "catching up" to do.   The Rules (same as before) 1) One question per comment (to allow voting to carry more information about people's preferences). 2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan). 3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about the Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov and in Eliezer's Singularity Summit 2011 talk. 4) Please provide links to things referenced by your question. 5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin preparing responses to.   I might respond to certain questions within the comments thread; for example, when there is a one-word answer to the question. You may repeat questions that I did not answer in the first round, and you may ask follow-up questions to the answers I gave in round one.
d25406a5-16ac-4b53-aa56-d8f4c06d3f62
trentmkelly/LessWrong-43k
LessWrong
Complete Class: Consequentialist Foundations The fundamentals of Bayesian thinking have been justified in many ways over the years. Most people here have heard of the VNM axioms and Dutch Book arguments. Far fewer, I think, have heard of the Complete Class Theorems (CCT). Here, I explain why I think of CCT as a more purely consequentialist foundation for decision theory. I also show how complete-class style arguments play a role in social choice theory, justifying utilitarianism and a version of futarchy. This means CCT acts as a bridging analogy between single-agent decisions and collective decisions, therefore shedding some light on how a pile of agent-like pieces can come together and act like one agent. To me, this suggests a potentially rich vein of intellectual ore. I have some ideas about modifying CCT to be more interesting for MIRI-style decision theory, but I'll only do a little of that here, mostly gesturing at the problems with CCT which could motivate such modifications. ---------------------------------------- Background My Motives This post is a continuation of what I started in Generalizing Foundations of Decision Theory and Generalizing Foundations of Decision Theory II. The core motivation is to understand the justification for existing decision theory very well, see which assumptions are weakest, and see what happens when we remove them. There is also a secondary motivation in human (ir)rationality: to the extent foundational arguments are real reasons why rational behavior is better than irrational behavior, one might expect these arguments to be helpful in teaching or training rationality. This is related to my criterion of consequentialism: the argument in favor of Bayesian decision theory should directly point to why it matters. With respect to this second quest, CCT is interesting because Dutch Book and money-pump arguments point out irrationality in agents by exploiting the irrational agent. CCT is more amenable to a model in which you point out irrationality by helping the ir
15599bb9-688d-4500-8a59-5a011d8520a2
trentmkelly/LessWrong-43k
LessWrong
The sentence structure of mathematics "Alice pushes Bob." "Cat drinks milk." "Comment hurts feelings." These are all different sentences that describe wildly different things. People are very different from cats, and cats are very different from comments. Bob, milk, and feelings don't have much to do with each other. Pushing, drinking, and (emotionally) hurting are also really different things. But I bet these sentences all feel really similar to you. They should feel similar. They all have the same structure. Specifically, that structure is noun-verb-noun. Because these sentences all share the same fundamental underlying structure, they all feel quite similar even though they are very different on the surface. (The mathematical term for "fundamentally the same but different on the surface" is isomorphic.) When you studied sentence structure back in grammar school (it wasn't just me, right?) you learned to break down sentences into their parts of speech. You learn that nouns are persons, places, or things, and verbs are the activities that nouns do. Adjectives describe nouns, and adverbs describe pretty much anything. Prepositions tell you where nouns go. Etc. Parts of speech are really abstract and really general. When you look at the surface, the sentence *the ant crawls on the ground* and the sentence *the spaceship flies through space* could not possibly be more different. But when you look at the sentence structure, they're nearly identical. The concept of "parts of speech" emerges when we notice certain general patterns arising in the way we speak. We notice that whether we're talking about ants or spaceships, we're always talking about things. And whether we're talking about crawling or flying, we're always talking about actions. And so on for adjectives, adverbs, conjunctions, etc., which always seem to relate back to nouns and verbs—adjectives modify nouns, for example. Next we simply give things and actions, descriptors and relational terms some confusing names to make sure the peons c
AI policy & governance in Australia: notes from an initial discussion People affiliated with EA who are interested in improving AI policy and governance in Australia have been meeting to learn from each other, create a shared understanding, and investigate opportunities to coordinate our impact.  We're sharing these notes to help increase awareness in other communities and countries that these conversations are happening, as some of the themes may resonate or be of use for other groups having similar discussions.  We'd be open to speaking with and collaborating with other groups, in the spirit of [improving coordination in AI Safety movement building](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD). At a recent meeting, we had an initial discussion reflecting on the prompt: > *What challenges does Australia face in transitioning to a world with advanced AI?* > > The structured discussion involved  1. generating reactions to the question in Miro sticky notes, including attempting to answer the question, offering reflections on their experience prompted by the question, or critiquing / questioning the question itself (e.g., what do we mean by 'advanced'); 2. commenting on others' reactions in Miro; 3. discussing the reactions and comments, with the goal of increasing our shared understanding of salient issues. Here are some of the themes that emerged from this initial discussion. We intend to explore these themes in more detail and may use them to guide future work by this group.  Themes from an initial discussion ================================= *Note. The intention of this discussion was to elicit reactions and points of agreement / disagreement, rather than identify or prioritise areas that the group thought was most important to address.* ### **Australia’s Economy is exposed to automation; but are there pathways to benefit from transformative AI?** One major challenge identified was for Australia's labour and economy, particularly the risk of high unemployment in the knowledge-work-heavy economy due to the automation of knowledge tasks from AI. This may be exacerbated by Australia having less technology capacity & leadership than similar other countries, meaning that structural unemployment is a live possibility due to Australia's inability to develop 'home grown' AI systems, or effectively adapt to international companies that offer AI / automation products to replace our workers.  At the same time, Australia is also relatively agile and, with good policy and regulation, productivity could increase with automation. Whether these policies can be identified and implemented before a terminal loss of tax revenue needs further work. ### **Australia as a Potential Global Thought Leader in AI** Some reactions considered or questioned the role of Australia in the global AI transition. On one hand, AI governance can be considered a multi-layered coordination problem, some elements of which are relevant nationally / domestically, even if other parts are international, global, or reliant on so-called 'key actors' like firms developing advanced AI. This would suggest that people wanting to have the greatest impact should either focus on Australian national / domestic policy, regulation, and governance, or ignore Australia to work on international issues or for the 'bigger fish'. 
An alternative view was to consider the track record of Australian groups in spurring global agreements or regulation in other areas, especially nuclear non-proliferation - despite never developing or holding nuclear weapons (ICAN, the International Campaign to Abolish Nuclear Weapons) - and the Australia Group, which seeks to control the trade of precursors for biological and chemical weapons. This suggests there may be a role for Australia to act as a 'thought leader' or 'policy leader' in international AI governance.

### **Advocacy and Policy: Strategic postures**

The Australian government and Australian policymaking with respect to AI have focused on robotics, automation, and competition - not global catastrophic or existential risks. Strategic postures were discussed: should advocacy initially address these issues to build a track record and credibility; what do good futures look like for Australia, and how could they be shaped by the group's actions?

### **Navigating Diverse Risk Categories and Sources in AI Transition**

Various risk categories and sources associated with the AI transition were raised. These ranged from humanity's general lack of wisdom and perverse incentives to the democratisation of AI technologies, security concerns, coordination challenges, and geopolitical tensions. The specific relevance of these to Australia and Australian policy needs further unpacking.

### **How could the group improve its knowledge infrastructure, coordination, and impact?**

The role of the group in addressing some of these challenges was examined. Some members of the group are already actively advocating for policy change through political channels; others have made opportunistic policy submissions; and others still are considering how and in what direction they can act.

Several ideas were raised for useful activities the group could do. Some of these ideas were focused on improving the group's shared understanding, such as comparing estimated timelines for advanced AI and identifying points of agreement or disagreement for goals and strategies. Other ideas were focused on building capacity or direction to act, such as mapping stakeholders, creating templates for policy submissions, or identifying and recruiting members.

What's next
===========

We intend to continue to meet, discuss, and act to influence AI policy & governance in Australia. If you'd like to stay informed or get involved, please message or email me (alexander@aksaeri.com).

Author statement
================

An initial analysis of themes was created by GPT-4 from notes typed by meeting attendees. I (Alexander Saeri) edited and significantly rewrote the initial analysis (>90% words changed), and wrote all other text.
Five neglected work areas that could reduce AI risk

Tldr: We identify five areas of work that should be further investigated:

1. Helping information aggregators scale during advanced AI development,
2. Improving internal AI deployment in policy organizations,
3. Researching the institutional design for evaluating alignment plans,
4. Planning for how to evaluate automated alignment researchers, and
5. Creating further education and information material.

Summary
=======

We outline and discuss five areas of work that seem neglected. We only shallowly review and discuss these areas. Our proposals should be seen as a recommendation to further investigate these questions and figure out whether they are actually worth pursuing and how to pursue them. For each of the five work areas, we discuss what they are, what one could do here, why they are important, and what our key uncertainties are.

1. **Help Information Aggregators and Reviewers Scale during Takeoff**
   1. Effective information provision and compilation are vital for informed societal choices. As advanced AI systems evolve, current information channels and curators, such as journalists, social media platforms, think tanks, and governments, will likely struggle to keep up. Identifying ways to support them or introducing new organizations to monitor global events seems important.
2. **Consult and Research Responsible AI deployment in Governments**
   1. Decision-making bodies such as the government are facing the increasingly important question of how to deploy advanced AI systems internally. This decision can make societal decision making much better but also much worse. Research and advising of policymakers can help avert various failure modes of internal AI deployment.
3. **Research Institutional Design for Evaluating Alignment Plans**
   1. Many AI governance plans involve a public agency evaluating the alignment plan for an advanced AI project and then approving or rejecting the training run. Designing such an evaluation process seems hard, and default institutional designs seem likely insufficient, so somebody should figure out what this process should ideally look like.
4. **Plan for How to Evaluate Automated Alignment Researchers**
   1. Most of the alignment work might be conducted by AI systems. OpenAI, for instance, has plans to develop automated alignment researchers. However, the success of this relies on being able to successfully evaluate alignment work at scale, which is currently lacking. We need to figure out how to evaluate the hard-to-measure questions, e.g., whether one is overall making progress on alignment.
5. **Create and Disseminate Education and Information Material**
   1. If the world becomes much more aware of the possibility of transformative AI, many decision-makers will need to quickly and deeply familiarize themselves with AI alignment and AI existential safety questions. This requires good summaries, literature reviews, education courses, etc.

Help information aggregators scale during takeoff
=================================================

**What is it:** In a world of widespread AI deployment, many things in the world may move much quicker, and existing information aggregators and collectors (e.g., journalists, Twitter/X, think tanks) will likely be overwhelmed and be less successful at doing their job. Figuring out how to help them or step in as a novel organization aimed at tracking important things in the world seems important.
**What to do:** Build AI tools for better information filtering/aggregation, build expertise in relevant areas (e.g., AI hardware, ML subfields, AI safety, biosecurity), get experience and connections by doing the version of this that currently exists (e.g., [Epoch AI](https://epochai.org/), journalism, intelligence agencies, AGI labs). Build AI journalism as a subfield (like [environmental journalism](https://www.iberdrola.com/culture/environmental-journalism)). Figure out if existing institutions can be changed to sufficiently do this or if a new institution is needed. We're not too sure what else to do here.

**Why is it impactful:** This is currently extremely neglected. Some people casually talk about how deep-fakes and LLM-generated content/spam could be a big deal for the information landscape, but nobody seems to be taking this sufficiently seriously and preparing, let alone planning for explosive growth scenarios: the current world is wholly unprepared.

**Key uncertainties:** It's unclear if we're going to get explosive growth before or after we face most of the misalignment risk; if it's after, then this is probably much less important to work on. If take-off is sufficiently slow, existing institutions will adapt.

**Additional discussion:** Good information is important for pretty much all decision making. Information provision might be much worse in a world with advanced AI systems. It is currently very difficult to track all of the important things happening in the world. This will get even more difficult as the pace of tech R&D accelerates due to automated researchers and AI tools speeding up human researchers. Solving this problem might include building a new institute for understanding what's going on, and relaying this to relevant decision makers.

This institute would broadly be trying to understand and keep track of what is happening in the world with regard to AI development; some things that would be in its purview: reading current ML and AI safety research and trying to predict what is upcoming in those fields, tracking the location of GPU clusters, keeping a database of major AI developers and key x-risk related information about them (the quality of their models, large training runs they do, internal sentiment around AI risk, etc.), keeping tabs on the use of AI in particularly high stakes domains like warfare and biology, and understanding how various countries are reacting to AI development and governing risks.

The general idea is that things could be moving super fast once we get substantial research speedups, and there are not currently institutions that are prepared for this speedup and help to improve the world. Various institutes would serve as a point of information aggregation and distillation, being able to support AI labs, governments, and other key decision makers by having already developed methods for keeping track of what's true and important in this whacky landscape. If the world is able to develop an international agency on AI as suggested by [Karnofsky](https://www.alignmentforum.org/posts/vZzg8NS7wBtqcwhoJ/nearcast-based-deployment-problem-analysis#The_roles_of_Magma_and_IAIA_in_the_scenario), such an agency might absorb or closely partner with such institutes. This institute might benefit from hiring people with experience doing complex information aggregation in difficult domains, such as people with a traditional intelligence background, finance analysts who specialize in AI-related fields, or journalists.
There would also likely be a need to hire experts with an excellent understanding of various parts of the AI and AI safety research ecosystems, from chip design to fine-tuning.

Research and Consulting on AI Deployment in Governments
=======================================================

**What is it:** Decision-making bodies such as the government are facing the increasingly important question of how to deploy advanced AI systems internally. This decision can make societal decision-making much better but also much worse. Research and advising of policymakers can help avert various failure modes of internal AI deployment.

**What to do:** Individuals could advise governments on integrating Large Language Models (LLMs) into their operations, or research AI integration in government (e.g., if a team starts using LLMs to write their daily briefs, how does this change the frequency of false claims making their way into the brief? Do people trust LLMs to be more factual than they actually are?). We're not sure what to do in this area, but we expect it ranges from field experiments in the workplace to experimental testing with new AI products. While the impact in the short term might not be that big, it would help immensely to build relevant knowledge, trust, and networks to have future opportunities to feed into such processes.

**Why is it impactful:** We expect that the success of advanced AI development and deployment partly depends on how well-informed government decisions will be. Outsourcing work to AI systems when it comes to such decisions will be a hard balancing act with many possible failure modes: one might outsource too much, too little, or the wrong work. Work to improve this process will likely be underprovided because

* Very few organizations work on digitization in government (a few academics, think tanks, and officials in public agencies),
* Many may underestimate the transformative potential and risks associated with AI and may overlook critical aspects, and
* By default, the recommendations and policies made will have a weak evidence base.

Therefore, we expect there to be a lot of potential to improve AI deployment policies for high-stakes decision making.

**Key uncertainties: Tractability:** Anecdotal evidence from a former consultant suggests that consulting for governments on digitization mostly fails. If the digitization of government is such a hard problem, we may expect 1) governments to err on the side of not using LLMs or 2) the tractability of government advising to be low.

**Crowdedness:** Perhaps public policy researchers, digitization-of-government researchers, and organizational psychology researchers will be working on this. However, those researchers may not be motivated to move from basic to applied science fast enough.

The magnitude of failure from improperly integrating AI with one's workflow scales with the available AI capabilities: failing to integrate GPT-4, or integrating it too much, likely results in a couple of percentage points difference in productivity. For future AI systems, failing to integrate them could mean losing huge amounts of potential value, and integrating them in a bad way could result in major losses. Focusing on this problem is particularly important if government decision-making matters a lot, e.g., in short-timeline slow-takeoff worlds or if governments are needed to make the post-AGI transition go well.
On the other hand, if the major hurdles to existential security are near-term technical research, this would be less important to work on.

**Additional Discussion:** As AIs improve at solving long-horizon tasks, there will be strong incentives to delegate more of our human workflows to these AIs. For some domains this will be perfectly safe, but for high-stakes domains it could cause major harms from AIs making mistakes or pursuing misaligned goals. One wants to track how various key societal roles are outsourcing tasks to AI systems. This could involve studying how they are using it and what the effects are, e.g., is there too much outsourcing, too little outsourcing, are they outsourcing the wrong things, losing human expertise in the process, or doing the kind of outsourcing where there is still meaningful human control? These questions could be studied by social scientists and, e.g., public sector consultancies. Researchers could interview people, do more anthropological studies, study the effects of various automation tools, and share best practices.

If one understands the key failure modes, one would be able to make the technology safer, enhance human decision making, and avoid enfeeblement, where society willingly gives up control. The research could inform the decisions of AI labs and product teams, AI regulation, or organizations such as the public sector or the executive, which could simply implement the guidelines and best practices internally. To contribute to this, one could do research on AI workflow integration or outsourcing in the public sector, provide literature summaries, or consult and inform key decision-making organizations directly on responsible AI usage.

Institutional Design for Evaluating Alignment Plans
===================================================

**What is it:** Many AI governance plans involve a public agency evaluating the alignment plan for an advanced AI project. The agency then approves or rejects the training run proposal. Designing such an evaluation process seems hard. Current institutional designs are likely insufficient. Somebody should figure out what this process should even look like.

**What to do:** Study the benefits and drawbacks of existing safety/risk assessment frameworks, scope the problem of evaluating alignment plans, and build relevant connections and credibility. This is about designing an institutional process for evaluating alignment research, not doing the evaluation itself. Learn from best practices of how to aggregate information, evaluations, and criticisms from various stakeholders and experts. Study the existing expert consulting processes of big institutions (e.g., the European Commission), their successes and pitfalls.

**Why is it impactful:** The success of establishing such institutions relies on the process effectively distinguishing between better and worse alignment plans.

**Key uncertainties:** Work on designing this institution may be intractable, too abstract, or too novel, especially given the strong precedent of existing risk assessment frameworks. Perhaps interested individuals should just work on improving current regulatory processes regarding AI regulation on the margin.
**Additional discussion:** Most governance plans for transformative Artificial Intelligence involve a common procedure: the AGI development team proposes their alignment plan to a governing body (e.g., their plan for the deployment of a newly trained model, their plan for whether to automate research, their plan for training a bigger model), which then approves the plan, requests revisions, or denies it. But what should happen procedurally in this agency before it makes such a decision? While we might not know right now which alignment plans will be effective, we can probably already think about designing the procedure. Existing risk and safety assessment frameworks, or public consultations as done for legal reviews, seem insufficient. What can we learn from them, and what would the ideal procedure look like? We think most processes are either easily gameable or do not aggregate most of the relevant information.

An exploration could include: How public should the procedure be? Should open comments be invited? How should they be incorporated? Who assesses them? How should disagreements about the safety of the alignment plan be handled? What would such a back-and-forth look like? How would the final decision be made? Who holds that group accountable? How should conflicts of interest (e.g., relevant experts working at AGI labs) be handled?

Evaluating automated alignment research
=======================================

**What is it:** Much of the alignment work might be conducted by AI systems. OpenAI, for instance, has plans to develop automated alignment researchers. However, the efficacy of this hinges upon a robust framework for evaluating alignment work, which is currently lacking. This has been discussed before, but we want to signal-boost it.

**What to do:** We don't really know what needs to be done here. Perhaps preparing for this role might just look like doing alignment research oneself and trying to do lots of peer review.

**Why is it impactful:** Evaluating certain tasks, especially hard-to-measure ones, will remain a human task, potentially causing bottlenecks. Certain types of alignment work are likely difficult to evaluate, such as conceptual and framing work, as evidenced by strong disagreement among researchers about the value of different research. While some evaluations might be straightforward (e.g., [using language models for neuron interpretability](https://openai.com/research/language-models-can-explain-neurons-in-language-models) or improving self-critique strategies), and sometimes AIs could support human evaluators (by doing replication or coming up with critiques), determining actual progress towards alignment remains a hard-to-measure evaluation target.

A common response is that "evaluation may be easier than generation". However, this doesn't mean evaluation will be easy in absolute terms, or relative to one's resources for doing it, or that it will depend on the same resources as generation. This means it is important to i) know what exactly should be done by humans and ii) know how this could be tracked such that firms are not cutting corners here.

**Key uncertainties:** Is this action relevant now? Is there any way one can feasibly prepare now?
Education, Review, and Information Material
===========================================

**What is it:** If timelines are short and the world becomes much more aware of the possibility of transformative AI, then many key decision-makers will need to quickly familiarize themselves with alignment and AI existential safety questions and develop a really good understanding. For that, there need to be really good sources of summaries and literature reviews.

**What to do:** In contrast to the other work areas we outlined in this post, more work already exists in this area: YouTube videos on the risks, policy reports for governments, explainers, intro talks for various audiences, and literature reviews, e.g., on AI timelines. Such work could be expanded, improved, and professionalized. For instance, reviews on topics such as takeoff dynamics, alignment challenges, development timelines, and alignment strategies, ranging from a quick read to an in-depth analysis, are useful.

**Why is it impactful:** Such resources play a crucial role in shaping key decision-making processes and public opinion. Given the multiplicity of threat models, the speculativeness and inherent uncertainty in AI development, and the political incentives for simplification and polarisation, good information and education material might be even more important here than for other problem areas.

**Key uncertainties:** For videos, reports, and other high-quality or wide-reaching mediums, we usually have some winner-takes-all dynamics where only the best material is useful. This should have some implications for who should work on this, how they should work on it, and what should be done. Even if winner-takes-all dynamics exist, it may be unclear ex ante who the "winners" will be, so investing in many projects might still be useful. Crowdedness: it seems like this work has expanded a lot in the last 6 months. It's not clear how much low-hanging fruit there will be in the future.
Disagreement with Paul: alignment induction

I had a discussion with [Paul Christiano](https://www.lesswrong.com/users/paulfchristiano) about his [Iterated Amplification and Distillation](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616) scheme. We had a disagreement, a disagreement that I believe points to something interesting, so I'm posting this here. It's a disagreement about the value of the concept of "preserving alignment".

To vastly oversimplify Paul's idea, the AI A[n] will check that A[n+1] is still aligned with human preferences; meanwhile, A[n-1] will be checking that A[n] is still aligned with human preferences, all the way down to A[0] and an initial human H that checks on it.

Intuitively, this seems doable - A[n] is "nice", so it seems that it can reasonably check that A[n+1] is also nice, and so on. But, as I pointed out [in this post](https://www.lesswrong.com/posts/ZyyMPXY27TTxKsR5X/problems-with-amplification-distillation), it's very possible that A[n] is "nice" only because it lacks power/can't do certain things/hasn't thought of certain policies. So niceness - in the sense of behaving sensibly as an autonomous agent - does not go through the inductive step in this argument.

Instead, Paul confirmed that "alignment" means "won't take unaligned actions, and will assess the decisions of a higher agent in a way that preserves alignment (and preserves the preservation of alignment, and so on)". This concept does induct properly, but seems far less intuitive to me. It relies on humans, for example, being able to ensure that A[0] will be aligned, that any more powerful copies it assesses will be aligned, that any more powerful copies those copies assess are also aligned, and so on.

Intuitively, for any concept C of alignment for H and A[0], I expect one of four things will happen, with the first three being more likely:

* The C does not induct.
* The C already contains all of the friendly utility function; induction works, but does nothing.
* The C does induct non-trivially, but is incomplete: it's very narrow, and doesn't define a good candidate for a friendly utility function.
* The C does induct in a non-trivial way, the result is friendly, but only one or two steps of the induction are actually needed.

Hopefully, further research should clarify if my intuitions are correct.
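A toy model may make the failed induction vivid (my own hedged sketch; the power threshold and payoff numbers are invented purely for illustration): each A[n] can look "nice" simply because the policy that would reveal misalignment sits outside its search space until it becomes powerful enough to find it.

```python
# Toy illustration: "niceness" that fails to induct with rising power.

def available_policies(power: int) -> list[str]:
    """Policies the agent is capable of considering at a given power level."""
    policies = ["cooperate"]
    if power >= 3:                 # the bad option only appears at high power
        policies.append("seize_resources")
    return policies

def act(power: int) -> str:
    # The agent picks the highest-payoff policy it can think of; at low
    # power the bad option never even enters its search space.
    payoff = {"cooperate": 1, "seize_resources": 9}
    return max(available_policies(power), key=payoff.get)

for n in range(5):                 # A[0] .. A[4], each more powerful
    print(f"A[{n}] (power {n}) acts: {act(n)}")
# A[0]-A[2] behave "nicely"; A[3] onward defect. Checking that A[n+1] is
# "nice" at its current power level tells us nothing about the inductive step.
```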
I think I've found the source of what's been bugging me about "Friendly AI"

In the comments on this post (which in retrospect I feel was not very clearly written), someone linked me to a post Eliezer wrote five years ago, "The Hidden Complexity of Wishes." After reading it, I think I've figured out why the term "Friendly AI" is used so inconsistently. This post explicitly lays out a view that seems to be implicit in, but not entirely clear from, many of Eliezer's other writings. That view is this:

> There are three kinds of genies: Genies to whom you can safely say "I wish for you to do what I should wish for"; genies for which no wish is safe; and genies that aren't very powerful or intelligent.

Even if Eliezer is right about that, I think that view of his has led to confusing usage of the term "Friendly AI." If you accept Eliezer's view, it may seem to make sense to not worry too much about whether by "Friendly AI" you mean:

1. A utopia-making machine (the AI "to whom you can safely say, 'I wish for you to do what I should wish for.'")

Or:

2. A non-doomsday machine (a doomsday machine being the AI "for which no wish is safe.")

And it would make sense not to worry too much about that distinction, if you were talking only to people who also believe those two concepts are very nearly co-extensive for powerful AI. But failing to make that distinction is obviously going to be confusing when you're talking to people who don't think that. It will make it harder to communicate both your ideas and your reasons for holding those ideas to them.

One solution would be to more frequently link people back to "The Hidden Complexity of Wishes" (or other writing by Eliezer that makes similar points--what else would be suitable?) But while it's a good post and Eliezer makes some very good points with the "Outcome Pump" thought-experiment, the argument isn't entirely convincing. As Eliezer himself has argued at great length (see also section 6.1 of this paper), humans' own understanding of our values is far from perfect. None of us are, right now
Brain-centredness and mind uploading

The naïve way of understanding mind uploading is "we take the connectome of a brain, including synaptic connection weights and characters, and emulate it in a computer". However, people want their *personalities* to be uploaded, not just their brains. That is more than just replicating the functionality of their brains *in silico.* This nuance has led to some misunderstandings, for example, to experts wondering [1] why on Earth anyone would think that brain-centredness [2] (the idea that brains are "sufficient" in some vague sense) is a necessary prerequisite for successful whole brain emulation. Of course, brain-centredness is not required for *brain* uploading to be technically successful; the question is whether it is sufficient for *mind* uploading in the sense that people actually care about.

The first obvious extension that may be required is the chemical environment of the brain. Here are some examples:

* Are you familiar with someone whose personality is radically (and often predictably) altered under the influence of alcohol or drugs? This is not an exception, but a rule: most people are affected by this, only to a smaller extent. Only the transiency of the effects allows us to label them as simple mood changes.
* I have observed that my personal levels of neuroticism vary depending on the pharmaceutical drugs I'm using. Nootropics make me more nervous, while anti-hypertension drugs have the reverse effect.
* The levels of hormones in the blood function as long-term personality changes. There are neurotransmitters that are themselves slow-acting, for example, nitric oxide [3].
* Artificially enhanced levels of serotonin in the brain cause it to "adapt" to this environment - this is how some antidepressants (namely, SSRIs) work [4].

[Whole Brain Emulation - A Roadmap](http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pd) includes a short section about the "body chemical environment" and concludes that for "WBE, the body chemistry model, while involved, would be relatively simple", unless protein interactions have to be modelled.

The technical aspect notwithstanding, what are the practical and moral implications? I think that there's not only a problem here, but also an opportunity. Why keep the accidental chemistry we have developed in our lifetimes, one that presumably has little relation to what we would really *like* to be - if we could? Imagine that it is possible to create carefully improved and tailored versions of the neurotransmitter "soup" in the brain. There are new possibilities here for personal growth in ways that have not been possible before. These ways are completely orthogonal to the intelligence enhancement opportunities commonly associated with uploading. The question of personal identity is more difficult, and there appears to be a grey zone here. A fictional example comes to mind, the protagonist of *Planescape: Torment* - is he the *same* person in each of his incarnations?

The second extension required to upload our personalities in the fullest sense might be the peripheral nervous system. Most of us think it's the brain that's responsible for emotions, but this is a simplified picture. Here are some hints why:

* The James-Lange 19th-century theory of emotions proposed that we experience emotion in response *to* physiological changes in our body. For example, we feel sad because we cry rather than cry because we are sad [5].
While the modern understanding of emotions is significantly different, these ideas have not completely gone away, either from academic research [5] or from everyday life. For example, to calm down, we are advised to take deep and slow breaths. Paraplegics and quadriplegics with severe spinal cord injuries typically experience less intense emotions than other people [6].

* Endoscopic thoracic sympathectomy (ETS) is a surgical procedure in which a portion of the sympathetic nerve trunk in the thoracic region is destroyed [7]. It is typically used against excessive hand sweating. However, "a large study of psychiatric patients treated with this surgery [also] showed *significant reductions in fear, alertness and arousal* [..] A severe possible consequence of thoracic sympathectomy is corposcindosis (split-body syndrome) [..] In 2003 ETS was banned in Sweden due to overwhelming complaints by disabled patients." The complaints include not having been able to lead an emotional life as fully as before the operation.
* The enteric nervous system in the stomach "governs the function of the gastrointestinal system" [8]. I'm not sure how solid the research is, but there are a lot of articles on the Web that mention the importance of this system to our mood and well-being [9]. Serotonin is "the happiness neurotransmitter" and "in fact 95 percent of the body's serotonin is found in the bowels", as is 50% of its dopamine [8]. "Gut bacteria may influence thoughts and behaviour" [10] by using the serotonin mechanism. Also, "Irritable bowel syndrome is associated with psychiatric illness" [10].

In short, different chemistry in the brain changes what we are, as does the peripheral nervous system. To upload someone in the fullest sense, their chemistry and PNS also have to be uploaded.

[1] [Randal Koene on whole brain emulation](https://intelligence.org/2014/03/20/randal-a-koene-on-whole-brain-emulation/)
[2] Anders Sandberg and Nick Bostrom, Future of Humanity Institute, [Whole Brain Emulation - A Roadmap](http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pd).
[3] Bradley Voytek's (Ph.D. neuroscience) Quora answer to [Will human consciousness ever be transferrable?](http://www.quora.com/Will-human-consciousness-ever-be-transferrable)
[4] [Selective serotonin reuptake inhibitors](http://en.wikipedia.org/wiki/Selective_serotonin_reuptake_inhibitor)
[5] Bear et al., Neuroscience: Exploring the Brain, 3rd edition, p. 564.
[6] Michael W. Eysenck, Perspectives On Psychology, p. 100 - [Google Books result](https://books.google.se/books?isbn=1317775333)
[7] [Endoscopic thoracic sympathectomy](http://en.wikipedia.org/wiki/Endoscopic_thoracic_sympathectomy)
[8] [Enteric nervous system](http://en.wikipedia.org/wiki/Enteric_nervous_system)
[9] Scientific American, 2010. [Think Twice: How the Gut's "Second Brain" Influences Mood and Well-Being](http://www.scientificamerican.com/article/gut-second-brain/)
[10] The Guardian, 2012. [Microbes manipulate your mind](http://www.theguardian.com/science/neurophilosophy/2012/aug/19/microbes-manipulate-your-mind)
Explaining Reinforcement Learning Policies through Counterfactual Trajectories

1 Introduction
---------------

In most of the benchmarks where reinforcement learning (RL) has shown impressive results (Bellemare et al., [2013](#bib.bib11 "The arcade learning environment: an evaluation platform for general agents")), (Tassa et al., [2018](#bib.bib16 "DeepMind control suite")), agents are trained and tested in the same environment. In contrast, many practical applications involve sim-to-real transfer or a real-world train/test distribution shift which can hurt test-time performance (Eysenbach et al., [2020](#bib.bib32 "Off-dynamics reinforcement learning: training for transfer with domain classifiers")), (Akkaya et al., [2019](#bib.bib31 "Solving rubik’s cube with a robot hand")), (Rusu et al., [2017](#bib.bib33 "Sim-to-real robot learning from pixels with progressive nets")), (Peng et al., [2018](#bib.bib34 "Sim-to-real transfer of robotic control with dynamics randomization")). It is therefore important to validate the performance and behavior of RL agents in the presence of a distribution shift. To do this, we take a step beyond summary metrics and focus on explainability techniques which help humans gain a more in-depth understanding of agent behavior.

![Our method seeks out more diverse states from which to show rollouts of agent behavior to the user.](https://media.arxiv-vanity.com/render-output/7620781/figures/teaser.png)

Figure 1: Our method seeks out more diverse states from which to show rollouts of agent behavior to the user.

Previous work in explainability often aims to help users understand and predict model behavior by explaining a set of model predictions (Hoffman et al., [2018](#bib.bib35 "Metrics for explainable ai: challenges and prospects")). In the reinforcement learning setting, it may be desirable to select the most informative segments that show the behavior of the agent. With this motivation, critical states methods show the agent's behavior in states which the agent thinks are important (Huang et al., [2018](#bib.bib1 "Establishing appropriate trust via critical states")), (Amir and Amir, [2018](#bib.bib19 "Highlights: summarizing agent behavior to people")). However, even if these methods successfully explain policy behavior, we cannot conclude whether the behavior generalizes when the distribution shifts in the environment.

Our work builds upon the core idea of creating a set of informative trajectories of the agent's behavior, but we focus on showing trajectories which are informative of the agent's behavior under a test-time state distribution. To achieve this, we first define a prior over the types of states we expect to see at test time. Unfortunately, most simple priors (e.g. uniform) will also include many unreachable states which are not informative to show the user. To avoid this issue, we use an exploration policy to seek out states which match our prior distribution. By navigating to these new states rather than directly initializing our environment in these states, we reach a diverse yet still reachable set of states.

To summarize our contributions, we designed counterfactual trajectories that explain behavior policies in out-of-distribution states, and we conducted studies with human users, measuring prediction ability, which show increased understanding of and generalization to out-of-distribution states.
Our code is available at <https://github.com/juliusfrost/cfrl-rllib>.

![Explanation pipeline](https://media.arxiv-vanity.com/render-output/7620781/figures/method.png)

Figure 2: Explanation pipeline

2 Related Work
---------------

### 2.1 Saliency

One popular class of agent interpretability approaches is saliency methods, which aim to show which features of the input cause specific agent output. This method was employed by (Greydanus et al., [2017](#bib.bib6 "Visualizing and understanding atari agents")), (Hilton et al., [2020](#bib.bib18 "Understanding rl vision")), and (Puri et al., [2020](#bib.bib14 "Explain your move: understanding agent actions using specific and relevant feature attribution")) to interpret which parts of the input were deemed important to the agent's decision. (Anderson et al., [2019](#bib.bib4 "Explaining reinforcement learning to mere mortals: an empirical study")) use saliency as a part of a more general agent explanation method. Even with all of this success, saliency methods have been shown to have limitations. (Atrey et al., [2020](#bib.bib13 "Exploratory not explanatory: counterfactual analysis of saliency maps for deep rl")) show that saliency does not necessarily correspond to the underlying representation in RL. In addition, saliency methods are not designed to deal with agents with memory, and in general may not be informative in a multi-timestep environment. All of these challenges point to saliency as a helpful explanatory tool but not as a stand-alone solution.

### 2.2 Interpretable Representations

Another approach is to use interpretable intermediate representations. In this approach, models are constructed so that even if they are not interpretable end-to-end, there will be some interpretable bottleneck or set of features used to make the final decision. (Madumal et al., [2019](#bib.bib2 "Explainable reinforcement learning through a causal lens")) use a causal model to generate explanations. (Chen et al., [2015](#bib.bib24 "Deepdriving: learning affordance for direct perception in autonomous driving")) apply this method to autonomous driving, where a model is trained to take an image and output understandable intermediate representations in addition to a final action. Other work such as (Kim et al., [2018](#bib.bib22 "Textual explanations for self-driving vehicles")) and (Jiang et al., [2019](#bib.bib23 "Language as an abstraction for hierarchical deep reinforcement learning")) use language as an informative intermediate representation or output. While successful in some applications, these kinds of interpretability methods often rely on domain knowledge to choose appropriate intermediates, and they are not applicable if a policy's decision cannot be explained by a small set of interpretable components.

### 2.3 Critical States

In the critical states framework, explanations are generated by showing especially informative "critical" states or trajectories from an agent. For example, the work of (Huang et al., [2018](#bib.bib1 "Establishing appropriate trust via critical states")) considers states critical if they have a large Q-value difference between actions or a very low entropy action distribution under a maximum entropy learning regimen. (Amir and Amir, [2018](#bib.bib19 "Highlights: summarizing agent behavior to people")) employ a similar approach, but they use videos instead of states and filter for diversity of trajectories.
These methods can be seen as extensions of a baseline method of visualizing random states or failure states of an agent in an environment. The explanation format of our method - videos of the agent's behavior in informative situations - is closest to this line of work.

### 2.4 Counterfactual Style Explanations

In (Rupprecht et al., [2019](#bib.bib21 "Finding and visualizing weaknesses of deep reinforcement learning agents")), researchers build a generative model of states and try to show users the agent's behavior in states that optimize some notion of "interesting." Again using a generative model, (Olson et al., [2019](#bib.bib3 "Counterfactual states for atari agents via generative deep learning")) use a GAN to produce modifications to states such that in the new state an agent takes a different action than it would have in the original state. While these works synthesize states directly, our method uses valid actions to access different parts of the state space. This means that any state we visit is actually reachable, at least in the training environment. The closest method to ours appears in the work of (Witty et al., [2018](#bib.bib20 "Measuring and characterizing generalization in deep reinforcement learning")), in which the authors try to characterize generalization by exploring different starting configurations, either by directly changing the start state or by using states visited by another agent. In contrast to our work, their insights are applied to characterizing generalization, and not as explanations for humans to understand agent behavior. (Anderson et al., [2019](#bib.bib4 "Explaining reinforcement learning to mere mortals: an empirical study")) provide a detailed empirical user study on the effectiveness of saliency map and reward explanations. They conclude there is no one explanation that fits all instances, but that using several methods yields the best mental model, and thus encourage using multiple methods for explainability. We hope to introduce a novel explainability method which may add to this suite of tools.

3 Method
---------

### 3.1 Preliminaries

In this work, we consider the challenge of generating informative trajectories which help a human user understand an agent's policy (called the behavioral policy) $\pi_\theta(a_t \mid s_t)$, which is parametrized by weights $\theta$ and can be trained through any RL or imitation learning method. The agent takes actions $a_t$ in states $s_t$, resulting in a trajectory $\tau$.

![Distribution shifts, illustrated in the MiniGrid environment. Our user study was performed in an environment similar to this, except with the bottom-right room connected to the others.](https://media.arxiv-vanity.com/render-output/7620781/figures/distribution_shift.png)

Figure 3: Distribution shifts, illustrated in the MiniGrid environment. Our user study was performed in an environment similar to this, except with the bottom-right room connected to the others.

Our goal is to help the user predict the agent's behavior in a new test environment in the presence of a state distribution shift which results in different trajectory probabilities at train and test time: $p_{\text{train}}(\tau \mid \theta) \neq p_{\text{test}}(\tau \mid \theta)$. We consider two types of environment changes which can produce this distribution shift: differences in the initial state distribution ($p_0^{\text{train}}(s_0) \neq p_0^{\text{test}}(s_0)$ for some initial state $s_0$) and slight differences in the dynamics ($p_{\text{train}}(s_t \mid s_{t-1}, a_{t-1}) \neq p_{\text{test}}(s_t \mid s_{t-1}, a_{t-1})$ for some state $s_t$ and action $a_t$). We assume that there are no changes to the agent's action space or policy at test time - i.e.,
$\pi_\theta(a_t \mid s_t)$ remains consistent between train and test time. We also assume that dynamics distribution shifts are minor. See Section [3.4](#S3.SS4) for a more detailed discussion of this.

### 3.2 Method Overview

Figure [2](#S1.F2) summarizes our method. We build upon the explanatory pipeline used in past critical states work such as (Huang et al., [2018](#bib.bib1 "Establishing appropriate trust via critical states")). An agent is first pretrained using any RL or imitation learning algorithm. We then collect many trajectory rollouts of the agent, select a set to show the user, and finally test whether these explanatory trajectories have helped the human better predict the agent's behavior in the test environment. This is challenging because of the distribution shift between the train and test environments.

Past work has attempted to select an informative set of trajectories to show the user by innovating on Step 3 in Figure [2](#S1.F2) - the trajectory selection process. The critical states line of work selects trajectory segments which show the agent's behavior at states where the policy believes one particular action is much better than the others (Huang et al., [2018](#bib.bib1 "Establishing appropriate trust via critical states")). Unfortunately, methods like these which sub-select trajectories from a set of agent rollouts in the train environment are often not very informative about the agent's behavior under a distribution shift. Consider the train/test initial state distribution shift in Figure [3](#S3.F3). A well-trained agent which was always initialized in the top-left room during training may take a nearly deterministic trajectory navigating toward the goal every time, so none of the trajectories visited by the agent in the training environment will ever visit the states in the top-right room which the agent will experience at test time.

One way to generate trajectories in the training distribution which are more informative of the test distribution would be to manually change the starting state distribution. For instance, in the grid world shown in Figure [3](#S3.F3), the simulator could be modified to directly initialize the agent in a uniform distribution over starting cells, ensuring that some trajectories will have been seen in whatever state the agent finds itself at test time. One challenge with this approach is that it is often difficult to restrict the set of randomized initializations to reachable states the agent could possibly visit at test time. Instead, we propose creating a new distribution of start states using an exploratory policy $\pi_\phi$ which can navigate from the states seen in the training distribution to a new, more diverse distribution of start states $p_0^{\text{expl}}(s_0 \mid \phi)$ which are guaranteed to be reachable.
Implementing this only involves adding one additional Step 1.5 to the pipeline shown in Figure [2](#S1.F2). Our trajectory generation method is summarized in the following algorithm:

1. Run the behavioral policy $\pi_\theta$ in the train environment. Pause at a randomly selected state along the trajectory.
2. Run the exploratory policy $\pi_\phi$ starting at this state for a fixed number of timesteps. The exploratory policy will guide the agent to a set of states not typically observed in the training data. We call these counterfactual states because we can frame it as a counterfactual question: "what would the behavioral policy do if it instead navigated to this state?"
3. Run the behavioral policy starting from the counterfactual state for the remainder of the trajectory.
4. Show the user video rollouts of the agent's behavior starting in these counterfactual states.

![Possible types of train/test distribution shifts. Our user experiments use Shift A, although we would expect our method to be applicable to Shifts C and E as well.](https://media.arxiv-vanity.com/render-output/7620781/figures/dist_shift_compared.png)

Figure 4: Possible types of train/test distribution shifts. Our user experiments use Shift A, although we would expect our method to be applicable to Shifts C and E as well.

Note that the trajectory selection method used in item 4 of our algorithm can be very simple - in our experiments we randomly select trajectories. However, our method is complementary to other work on selecting informative trajectories, so future work could select trajectories more systematically, for instance filtering out near-repeated trajectories as was done in (Amir and Amir, [2018](#bib.bib19 "Highlights: summarizing agent behavior to people")) or selecting based on critical states as was done in (Huang et al., [2018](#bib.bib1 "Establishing appropriate trust via critical states")).
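A minimal sketch of this generation loop (my own illustration; the gym-style `env.step`/`env.render` interface and the `behavioral_policy` and `exploration_policy` callables are assumptions, and the authors' actual implementation lives in the linked cfrl-rllib repository):

```python
import random

def generate_counterfactual_trajectory(env, behavioral_policy, exploration_policy,
                                       explore_steps=15, max_steps=200):
    """One rollout of the four-step procedure above; returns video frames."""
    obs = env.reset()

    # Step 1: run the behavioral policy, pausing at a random state.
    for _ in range(random.randint(0, max_steps - 1)):
        obs, _, done, _ = env.step(behavioral_policy(obs))
        if done:
            return None  # episode ended before we could branch off

    # Step 1.5 / Step 2: the exploration policy guides the agent to a
    # counterfactual state it would not normally visit.
    for _ in range(explore_steps):
        obs, _, done, _ = env.step(exploration_policy(obs))
        if done:
            return None

    # Steps 3-4: resume the behavioral policy from the counterfactual
    # state, recording the remainder of the trajectory for the user.
    frames = [env.render(mode="rgb_array")]
    for _ in range(max_steps):
        obs, _, done, _ = env.step(behavioral_policy(obs))
        frames.append(env.render(mode="rgb_array"))
        if done:
            break
    return frames
```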
### 3.3 Exploration Objective

Our aim is to choose an exploration objective which will guide the agent to states which produce maximally informative trajectories. Intuitively, explanatory trajectories will be more informative the closer the distribution of trajectories visited by our exploratory policy is to the test-time distribution of trajectories visited. More formally, let $p_0^{\text{expl}}(s_0 \mid \phi)$ be the distribution of states in which the agent finds itself after running the exploration policy, and let $p_{\text{expl}}(\tau \mid \phi, \theta)$ be the distribution of trajectories generated by rolling out the behavioral policy $\pi_\theta$ initialized in $p_0^{\text{expl}}(s_0 \mid \phi)$. Our goal is to achieve $p_{\text{expl}}(\tau \mid \phi, \theta) \approx p_{\text{test}}(\tau \mid \theta)$.

It is easiest to choose a principled exploration objective by making the assumption that train and test-time dynamics are approximately identical. In this situation, the train-test distribution shift consists entirely of changes to the initial state: $p_0^{\text{train}}(s_0) \neq p_0^{\text{test}}(s_0)$. However, if we are able to choose an exploration policy which seeks out states matching the test-time distribution - i.e., if $p_0^{\text{expl}}(s_0 \mid \phi) \approx p_0^{\text{test}}(s_0)$ - then because we roll out the same behavioral policy in environments with identical dynamics starting from nearly identical start distributions, we will achieve $p_{\text{expl}}(\tau \mid \phi, \theta) \approx p_{\text{test}}(\tau \mid \theta)$.

We do not precisely know $p_0^{\text{test}}(s_0)$ before the agent experiences the test environment, so instead we choose a prior for our test-time start state distribution. In the absence of domain knowledge about what states are likely under the test-time distribution shift, we assume a uniform prior over reachable test-time states and try to match this distribution as closely as possible by using an exploratory policy which seeks out a uniform distribution of states. (Note that with this exploration objective, the only reason we begin to run $\pi_\phi$ along a trajectory generated by $\pi_\theta$ rather than running $\pi_\phi$ from the start is to make achieving the exploration objective easier for a learned exploration policy.) However, if the user does have domain knowledge about which states are likely at test time, the exploratory policy could be modified to preferentially seek out those states. For instance, if the user believes that test-time states will be near the training distribution, the exploration objective may instead be to seek out a uniform distribution of states within a certain radius of those seen at train time.

There is a variety of past exploration work in which agents are trained to seek out a uniform distribution of states or to learn distinct skills which take the agent to different parts of the state space, such as (Sharma et al., [2019](#bib.bib27 "Dynamics-aware unsupervised discovery of skills")) and (Eysenbach et al., [2018](#bib.bib28 "Diversity is all you need: learning skills without a reward function")). While any of these off-the-shelf exploration algorithms could be used, in our work we experiment in the MiniGrid environment, where the state space and dynamics are simple enough that it is possible to hard-code an oracle exploration policy which achieves uniform coverage over reachable states. By using this oracle policy, we are able to directly test the question of whether highly diverse states serve as better explanations, rather than implicitly also testing the ability of the exploration policy.
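In a gridworld this oracle can be hard-coded directly. A hedged sketch of the idea (my own illustration; `reachable_cells` and `shortest_path_action` are hypothetical stand-ins for a gridworld model, not the authors' API):

```python
import random

class OracleExplorer:
    """Uniform-coverage exploration for a small gridworld: repeatedly
    sample a target cell uniformly from the reachable set, then walk
    toward it along a shortest path."""

    def __init__(self, grid):
        self.grid = grid        # hypothetical gridworld model
        self.target = None

    def __call__(self, agent_pos):
        if self.target is None or agent_pos == self.target:
            # Uniform sampling over reachable cells yields (approximately)
            # uniform coverage of counterfactual start states.
            self.target = random.choice(self.grid.reachable_cells())
        return self.grid.shortest_path_action(agent_pos, self.target)
```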
### 3.4 Distribution Shifts Where Our Method Helps

There are multiple types of changes between train and test environments which can induce a difference between $p_{\text{train}}(\tau \mid \theta)$ and $p_{\text{test}}(\tau \mid \theta)$, as illustrated in Figure [4](#S3.F4 "Figure 4 ‣ 3.2 Method Overview ‣ 3 Method ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories"). In Section [3.3](#S3.SS3 "3.3 Exploration Objective ‣ 3 Method ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories"), we gave an intuitive justification for why we expect our method to help when train and test-time dynamics are identical and the exploration policy is able to cover the test-time start distribution (Shift A in Figure [4](#S3.F4 "Figure 4 ‣ 3.2 Method Overview ‣ 3 Method ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories")). Now, we consider our method's usefulness in the presence of other distribution shifts. Shift B starts the agent in a state which is unreachable in the train environment, so it is unlikely that our explanations can help there. This restriction to reachable distribution shifts is very limiting, but we can partially relax it in cases where the distribution shift does not change the agent's internal representation of the environment. Shift C illustrates an example of this: the test-time environment has the top-right corner wall segment removed, but so long as the agent's internal representation of the environment does not change much, the agent will act the same way it would act if the wall were present. As a result, explanations generated in the train environment will still be informative about test-environment behavior.

Finally, we consider dynamics shifts. In the case of global dynamics shifts, as shown in Shift D, our method is likely unhelpful, since train-time and test-time trajectories look very different even when starting from the same state. However, if dynamics shifts only occur in a few states of the test environment, as shown in Shift E, then most of the time a train-time trajectory and a test-time trajectory beginning at the same state will look identical. Therefore, we can think of the dynamics shift purely as a way to introduce a state distribution shift. While our explanations will not help the user predict these dynamics shifts, the user should be able to predict how the agent will act after the dynamics shift leads the agent to an unseen state.

4 Experiments
--------------

In this section, we test our primary hypothesis that diverse states help the user understand the behavioral policy's performance in the presence of a distribution shift. We empirically evaluate this through a Mechanical Turk user study for which we obtained IRB approval.

### 4.1 Study Design

Our study consisted of a questionnaire hosted online. Our participants were Mechanical Turkers from the United States with a high approval rating on past jobs. Each questionnaire consisted of an explanation phase, in which the user sees video trajectories of the policy in the train environment, and an evaluation phase, which measures the user's understanding of the policy's performance in the test environment. Train and test environments differ in the room in which the agent is initialized, as shown in Figure [5](#S4.F5 "Figure 5 ‣ 4.1.4 Experiment Settings ‣ 4.1 Study Design ‣ 4 Experiments ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories"). We tested participants on two tasks, described in Sections [4.1.2](#S4.SS1.SSS2 "4.1.2 Behavior Understanding Task ‣ 4.1 Study Design ‣ 4 Experiments ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories") and [4.1.3](#S4.SS1.SSS3 "4.1.3 Performance Evaluation Task ‣ 4.1 Study Design ‣ 4 Experiments ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories").

#### 4.1.1 Explanation Types

For each of the tasks, our user study compares three different types of explanations. In the *Random States* setting, participants are shown 10 sample trajectories selected at random from a dataset of rollouts in an environment with the agent initialized in a particular start region. In the *Critical States* setting, participants are shown a trajectory containing each of the 10 lowest-entropy states in the dataset, with the agent initialized in the same starting region each time; if two or more of the low-entropy states fall in the same trajectory, the next-lowest-entropy states are selected. Finally, in the *Counterfactual States* setting, participants are shown 10 trajectories of the behavioral policy beginning from a state chosen by the oracle exploration policy, which seeks out a uniform distribution over reachable states. The *Random States* setting acts as a baseline for the other two methods, because it is common practice to look at random rollouts to understand policy behavior. The *Critical States* setting was chosen as another baseline because it is the prior work whose explanation format most closely matches ours, allowing for easier comparison (a sketch of the selection rule appears below).
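The *Critical States* baseline ranks dataset states by the entropy of the policy's action distribution and keeps the lowest. A minimal sketch of this selection rule is below; the array layout is an illustrative assumption, and the study additionally deduplicates states that fall in the same trajectory, which we omit here.

```python
import numpy as np

def select_critical_states(policy_probs, k=10):
    """Sketch of critical-state selection: return indices of the k states
    where the policy's action distribution has the lowest entropy.
    `policy_probs` is an (N, A) array of action probabilities over
    N dataset states; illustrative only."""
    p = np.clip(policy_probs, 1e-12, 1.0)   # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=1)  # per-state policy entropy
    return np.argsort(entropy)[:k]          # k lowest-entropy states
```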
Finally, the *Counterfactual States* condition is our proposed explanation method. Since the agent is initialized in a different distribution of start states at test time, we expect the *Random States* and *Critical States* methods to be uninformative for helping the user predict test-time performance. In contrast, *Counterfactual States* show more diversity and should enable the user to predict test-time performance.

#### 4.1.2 Behavior Understanding Task

In this task, users first see videos of rollouts of πθ in the training environment, selected using one of the three explanation methods described in Section [4.1.1](#S4.SS1.SSS1 "4.1.1 Explanation Types ‣ 4.1 Study Design ‣ 4 Experiments ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories"). As users watch these explanations, they are asked to build a mental model of the specific behaviors of πθ. They are then presented with a context state in the test environment, which is identical to the train environment except that the agent is initialized in a different part of the state space, where πθ performs poorly. Below the context video, the user is presented with videos of three potential continuation trajectories from the context state. Only one of the continuation trajectories is generated by πθ, and participants must select the continuation corresponding to πθ. The incorrect continuation trajectories were generated by policies manually designed to behave distinctly differently from πθ: one is always successful in the test distribution, and one fails in a way which is visually distinct from πθ.

#### 4.1.3 Performance Evaluation Task

We next evaluate whether the user can predict the performance of πθ in the test environment. To do this, users are shown a context state from a test environment where the agent starts in a different state. The user must guess whether πθ will succeed or fail from that state, where success means the agent ends in the desired goal location. This task is designed to measure the participants' understanding of the probability that the agent succeeds at its task given its state.

#### 4.1.4 Experiment Settings

We test our method in a custom Minigrid environment (Chevalier-Boisvert et al., [2018](#bib.bib15 "Minimalistic gridworld environment for openai gym")) with four rooms connected by doors (shown in Figure [4](#S3.F4 "Figure 4 ‣ 3.2 Method Overview ‣ 3 Method ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories")A). We train our behavioral policy to good performance on the train environment using the A2C algorithm (Mnih et al., [2016](#bib.bib30 "Asynchronous methods for deep reinforcement learning")); a training sketch is given at the end of this subsection. To select counterfactual states, we use an oracle policy which uniformly samples a feasible state and navigates towards it. In the study, each participant sees 10 explanations and then answers 10 questions for our chosen evaluation task. For the Behavior Understanding task, the policy was trained in the top-left room, and explanations were generated in this room. In the test environment, the agent is positioned in the bottom-right room. As a result, the agent typically succeeds when initialized in the top-left room (where we collect explanations) but typically fails when initialized in the top-right room (where we initialize in the evaluation phase). For the Policy Evaluation task, we test users on two different behavioral policies. The first is only able to succeed starting in the top-left room, and the other is only able to succeed from the bottom-right room. In both cases, we collect explanations in the top-left room. Users are evaluated on the agent's behavior in all four rooms.
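Training a behavioral policy of this kind takes only a few lines with off-the-shelf tooling. The sketch below uses the Stable-Baselines3 and gym-minigrid APIs as a stand-in for our custom environment and training code; the environment name, observation wrapper, and step count are illustrative assumptions rather than our exact settings.

```python
import gym
from gym_minigrid.wrappers import FlatObsWrapper  # flatten MiniGrid observations
from stable_baselines3 import A2C

# A hypothetical stand-in for the paper's custom four-room environment:
# the standard MiniGrid FourRooms task with a flattened observation space.
env = FlatObsWrapper(gym.make("MiniGrid-FourRooms-v0"))

# Train the behavioral policy to good train-environment performance.
model = A2C("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=500_000)
model.save("behavioral_policy")
```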
![Experimental setup for the counterfactual states user study.](https://media.arxiv-vanity.com/render-output/7620781/figures/cf_experiments.png)

Figure 5: Experimental setup for the counterfactual states user study. Left: users see 10 train-time rollouts such as these in the explanation phase. Right: users are asked either to select the test-time behavior which corresponds to the policy they saw previously (Task 1 - Behavior Understanding) or to predict the agent's success in a new state (Task 2 - Policy Evaluation).

| Condition | Task 1 Accuracy | n | Task 2 Accuracy | n |
| --- | --- | --- | --- | --- |
| Random | 0.0400 | 10 | 0.6315 | 19 |
| Critical | 0.1458 | 10 | 0.6411 | 17 |
| Counterfactual | **0.3638** | 10 | 0.6904 | 21 |

Table 2: Minigrid policy understanding and evaluation. Task 1 is the Behavior Understanding task; Task 2 is Performance Evaluation. Bold means the result is statistically greater than the rest at p<0.05 under a one-sided t-test.

### 4.2 Results

Qualitatively, the counterfactual states method produces more informative rollouts in both tasks. In neither task do the random states or critical states methods produce any explanatory trajectories in which the agent visits the room where it is initialized at test time, which makes it hard for the user to predict the agent's performance in these states. In the counterfactual states condition, however, the user sees multiple examples of the agent in the distribution of states it will encounter at test time.

Quantitative results, however, are mixed. Table [2](#S4.T2 "Table 2 ‣ 4.1.4 Experiment Settings ‣ 4.1 Study Design ‣ 4 Experiments ‣ Explaining Reinforcement Learning Policies through Counterfactual Trajectories") shows that for the Behavior Understanding task, users select the correct behavior 36.38% of the time, a statistically significant improvement over the random and critical states conditions. This performance is only marginally above random guessing (33.3%), but at least users did not seem to be actively misled by the explanations, as occurred in the random and critical states cases. On the Policy Evaluation task, users perform similarly in all conditions, with no statistically significant difference between them (the significance test we use is sketched below).
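For reference, a one-sided two-sample t-test of the kind used in Table 2 can be computed as follows. The per-participant accuracy arrays are hypothetical placeholders, not the study data; scipy's `ttest_ind` is two-sided by default, so we halve the p-value after checking the sign of the statistic.

```python
import numpy as np
from scipy import stats

def one_sided_ttest(scores_a, scores_b):
    """Is condition A's mean accuracy statistically greater than B's?
    Halve the two-sided p-value when the effect is in the right direction."""
    t, p_two_sided = stats.ttest_ind(scores_a, scores_b)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p_one_sided

# Hypothetical per-participant accuracies (illustrative only):
counterfactual = np.array([0.5, 0.4, 0.3, 0.4, 0.3, 0.4, 0.3, 0.4, 0.3, 0.3])
random_states = np.array([0.0, 0.1, 0.0, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0, 0.1])
print(one_sided_ttest(counterfactual, random_states))
```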
5 Discussion
-------------

While our method outperformed baselines in one of two tasks, there is significant room for improvement, as users did not consistently perform well in any condition of any task, even though the environment and agent behavior were both quite simple. Part of the difficulty in constructing good explanations derives from the population we were testing: Mechanical Turkers likely have little prior experience with reinforcement learning, or little time to carefully analyze agent behavior. Anecdotally, we also observed that many survey-takers come to the task with their own biases and preconceptions (for instance, assuming that a policy will be deterministic rather than stochastic, or assuming that an agent which does well in one situation probably does well everywhere). Future work could test the usefulness of our counterfactual states method for agent developers, who can take the time to familiarize themselves with the task, the explanation interface, and the types of behaviors which typically emerge in policies. In this setting, we could also test our method's usefulness in more complex environments. The experiments we have run so far are quite limited: the environment is simple, an oracle exploration policy is available, the distribution shift only changes the agent's starting position, and it is easy for a user to distinguish between correct and incorrect behaviors. Future work could explore whether counterfactual states hold value in more realistic settings.

In this paper, we hypothesized that showing higher-diversity reachable trajectories enables users to better understand agent behavior. We tested this hypothesis through a user study in a gridworld in which the test-time initial state distribution differed from the train-time start distribution. Our method showed improvements over less diverse baseline methods in one of two tasks, but our experiments show room for improvement and illustrate the ongoing challenge of creating explanations which improve the understanding of non-expert users.

6 Acknowledgements
-------------------

Thanks to Anna Rohrbach for ongoing feedback on this work.
Grokking “Semi-informative priors over AI timelines”

Notes:

* I give visual explanations for Tom Davidson’s report, Semi-informative priors over AI timelines, and summarise the key assumptions and intuitions
* The diagrams can be found here – you can click on the boxes to get linked to the part of the report that you’re interested in [1]

Thanks to the Epoch team for feedback and support! Thanks especially to Jaime Sevilla and Tom Davidson for providing detailed feedback.

Executive Summary

The framework in Semi-informative priors over AI timelines assumes a model of AGI development which consists of a sequence of Bernoulli trials, i.e. it treats each calendar year as a “trial” at building AGI with constant probability p of succeeding.

Image source: Davidson, 2021

However, we don’t know what this value of p is, so we use a generalisation of Laplace’s rule of succession to estimate P(AGI next year | no AGI yet). This is done by specifying a first-trial probability, the probability of successfully building AGI in the first year of AI research, together with the number of virtual successes, which tells us how quickly we should update our estimate for P(AGI next year | no AGI yet) based on evidence. The framework leans very heavily on the first-trial probability, which is determined using a subjective selection of reference classes (more here). How much evidence we get depends on the number of trials that we see, which depends on the regime start-time – you can think of this as the time before which failure to develop AGI doesn’t tell us anything useful about the probability of success in later trials. For instance, we might think that 1956 (the year of the Dartmouth Conference) was the first year where people seriously started trying to build AGI, so the absence of AGI before 1956 isn’t very informative. If we think of each trial as a calendar year, then there have been 2021-1956 = 65 trials since the regime start-time, and we still haven’t developed AGI, so that’s 65 failed trials which we use to update P(AGI ne
The Simple Truth

> I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
>
> —Danielle Egan, journalist

This essay is meant to restore a naive view of truth.

Someone says to you: “My miracle snake oil can rid you of lung cancer in just three weeks.” You reply: “Didn’t a clinical study show this claim to be untrue?” The one returns: “This notion of ‘truth’ is quite naive; what do you mean by ‘true’?”

Many people, so questioned, don’t know how to answer in exquisitely rigorous detail. Nonetheless they would not be wise to abandon the concept of “truth.” There was a time when no one knew the equations of gravity in exquisitely rigorous detail, yet if you walked off a cliff, you would fall.

Often I have seen—especially on Internet mailing lists—that amidst other conversation, someone says “X is true,” and then an argument breaks out over the use of the word “true.” This essay is not meant as an encyclopedic reference for that argument. Rather, I hope the arguers will read this essay, and then go back to whatever they were discussing before someone questioned the nature of truth.

In this essay I pose questions. If you see what seems like a really obvious answer, it’s probably the answer I intend. The obvious choice isn’t always the best choice, but sometimes, by golly, it is. I don’t stop looking as soon I find an obvious answer, but if I go on looking, and the obvious-seeming answer still seems obvious, I don’t feel guilty about keeping it. Oh, sure, everyone thinks two plus two is four, everyone says two plus two is four, and in the mere mundane drudgery of everyday life everyone behaves as if two plus two is four, but what does two plus two really, ultimately equal? As near as I can figure, four. It’s still four even if I intone the question in a solemn, portentous tone of voice.
The Market Singularity: A New Perspective

> “Do you want to be rich, or do you want to be king?” — the founder’s dilemma

As we approach the technological singularity, the sometimes applicable trade-off between wealth (being rich) and control (being king) may extend into the realm of AI and governance. The prevailing discussions around governance often converge on the need for more coordination, effective global standards, and enforcing political regulations. The choice for a more active role of politics is sometimes framed as a choice for human control over the future, not only against the alien wills of future AIs, but also against coordination failures in general. It is a choice for human reasoning against many so-called molochian forces: prisoner's dilemmas, evolutionary races, arms races, races to the bottom, ruthless capitalism. Centralized control indeed seems to be necessary if we want to ensure that all AI is going to be altruistic, bound by universal ethical standards, or incapable of causing harm. However, I'll argue that this choice incurs important trade-offs, and may set us up for a much more brutal outcome, one in which we fail to secure human influence in the new institutions that are likely to replace existing ones.

The Choice

The founder's dilemma is the choice some startup founders face between retaining control of a small company, and accepting dilution for a stake in something much bigger. Both choices carry risks. By accepting loss of control, a founder may see their company going in a path they didn't initially approve. By rejecting dilution, a founder risks seeing their company getting outcompeted and eventually disappearing. Similarly, humans may face a choice between retaining control over institutions, at the risk of making them less efficient, and ceding early control in exchange for a larger stake in whatever comes next. The less efficient institutions can remain in power, but only if they can successfully repress the arrival of new ones. Repression doesn't need to last for
Wired on: "DOGE personnel with admin access to Federal Payment System"

I haven't looked into this in detail, and I'm not actually sure how unique a situation this is. But, it updated me on "institutional changes to the US that might be quite bad[1]", and it seemed good if LessWrong folk did some sort of Orient Step on it.

(Please generally be cautious on LessWrong talking about politics. I am interested in people commenting here who have read the LessWrong Political Prerequisites sequence. I'll be deleting or at least unhesitatingly strong downvoting comments that seem to be doing unreflective partisan dunking)

((But, that's not meant to mean "don't talk about political actions." If this is as big a deal as it sounds, I want to be able to talk about "what to do?". But I want that talking-about-it to feel more like practically thinking through an action space, than blindly getting sucked into a political egregore))

> A 25-year-old engineer named Marko Elez, who previously worked for two Elon Musk companies, has direct access to Treasury Department systems responsible for nearly all payments made by the US government, three sources tell WIRED.
>
> Two of those sources say that Elez’s privileges include the ability not just to read but to write code on two of the most sensitive systems in the US government: the Payment Automation Manager and Secure Payment System at the Bureau of the Fiscal Service (BFS). Housed on a secure mainframe, these systems control, on a granular level, government payments that in their totality amount to more than a fifth of the US economy.
>
> Despite reporting that suggests that Musk’s so-called Department of Government Efficiency (DOGE) task force has access to these Treasury systems on a “read-only” level, sources say Elez, who has visited a Kansas City office housing BFS systems, has many administrator-level privileges. Typically, those admin privileges could give someone the power to log in to servers through secure shell access, navigate the entire file system, change user permissions, and delete o
Could we set a resolution/stopper for the upper bound of the utility function of an AI?

My thought is that, instead of an AI purely trying to maximize a utility function, we give it a goal of reaching a certain utility, and then each time it reaches that goal, it shuts off and we can decide if we want to set the utility ceiling higher. Clearly, this could have some negative effects, but it could be useful if we used it in concert with other safety precautions.
Hierarchically Decoupled Imitation for Morphological Transfer

1 Introduction
---------------

How should one use Reinforcement Learning (RL) to train a new four-legged walking robot? Training a robot from scratch is often infeasible, as current RL algorithms typically require millions (Schulman et al., [2017](#bib.bib80 "Proximal policy optimization algorithms"); Haarnoja et al., [2018](#bib.bib72 "Soft actor-critic algorithms and applications")) to billions (Baker et al., [2019](#bib.bib81 "Emergent tool use from multi-agent autocurricula")) of samples. Moreover, robots are expensive and slow, which further limits the applicability of learning from scratch. Because of this, tackling the challenge of high sample complexity has received significant interest and prompted a wide array of solutions, ranging from off-policy learning (Lillicrap et al., [2015](#bib.bib77 "Continuous control with deep reinforcement learning")) to temporal abstraction in learning (Kulkarni et al., [2016](#bib.bib4 "Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation")).

A primary reason behind RL's sample inefficiency is the lack of prior knowledge used in training new policies. One of the biggest takeaways from advances in computer vision (CV) and natural language processing (NLP) is that priors (both architectural and parametric) are invaluable. This raises the question: why should an agent learn a task from scratch? Why not use strong priors? Recent works have shown the promise of semantic priors such as auxiliary losses (Jaderberg et al., [2016](#bib.bib3 "Reinforcement learning with unsupervised auxiliary tasks")) and architectural priors such as transfer learning networks (Rusu et al., [2016](#bib.bib24 "Progressive neural networks")). However, unlike passive domains such as object detection, problems in motor control and robotics offer another strong prior: morphology. Instead of learning policies from scratch on a given agent, we can use policies previously learned on morphologically different agents as a prior. For example, instead of training a quadruped walking policy to solve a maze from scratch, requiring millions of expensive samples, we could transfer-learn from a much simpler robot, say a wheeled Roomba robot. This morphological transfer affords two benefits: first, learning a policy on a simple morphology and then transferring it to a harder one induces a natural curriculum for learning (Bengio et al., [2009](#bib.bib6 "Curriculum learning")), which provides richer rewards and learning signal; second, it allows us to use fewer samples from complex robots that are often expensive and time-consuming to run.

![In this work, we focus on the problem of transferring long-horizon policies across morphologies.](https://media.arxiv-vanity.com/render-output/6435841/x1.png)

Figure 1: In this work, we focus on the problem of transferring long-horizon policies across morphologies. Hence, given an already performant simple agent, we imitate its behaviors on a more complex agent for transfer through hierarchical decoupling.

But how do we effectively transfer policies across morphologies?
Direct policy transfer is infeasible, since different morphologies have different state and action spaces (see Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Hierarchically Decoupled Imitation for Morphological Transfer") for examples of morphologies). Another option would be to use morphological latent embeddings (Chen et al., [2018](#bib.bib31 "Hardware conditioned policies for multi-robot transfer learning"); Pathak et al., [2019](#bib.bib28 "Learning to control self-assembling morphologies: a study of generalization via modularity")), but learning robust embeddings requires training across hundreds of morphologies. Instead, inspired by recent advances in hierarchical learning (Kulkarni et al., [2016](#bib.bib4 "Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation"); Nachum et al., [2018b](#bib.bib32 "Near-optimal representation learning for hierarchical reinforcement learning")), we propose a transfer learning framework using hierarchically decoupled policies. In this framework, the low-level policy is trained specific to a given morphology, while the high-level policy can be re-used across morphologies. For compatible transfer, only the high-level policy, operating on a global agent state, is transferred, while the low-level policy is learned independently.

However, as recent work by Nachum et al. ([2018a](#bib.bib43 "Data-efficient hierarchical reinforcement learning")) has shown, high-level policies are intricately tied to low-level policies. Intuitively, if a low-level policy isn't able to reach a specific sub-goal requested by the high-level policy, the high-level policy will not select that sub-goal. This presents a significant challenge for morphological transfer: since the low-levels of two agents might differ significantly due to differences in morphology that afford varying low-level capabilities, a zero-shot transfer of the high-level policy might not always be successful. To counter this, we propose a top-down alignment of the low-level policies. This is done by introducing an information-theoretic alignment loss that minimizes the mutual information between morphologies and low-level behaviors. This objective is practically optimized using discriminative learning, which allows the low-level policy of the more complex agent to imitate the behavior of the simpler agent's, making high-level transfer more successful. Using this low-level alignment significantly improves the transfer of high-level policies.

However, even with better alignment, zero-shot transfer of high-level policies will not fully utilize the additional benefits of a complex morphology. One way to improve on this is to finetune (Girshick et al., [2014](#bib.bib20 "Rich feature hierarchies for accurate object detection and semantic segmentation")) the high-level policies after the zero-shot transfer. But in RL, straightforward finetuning suffers from catastrophic forgetting (Rusu et al., [2016](#bib.bib24 "Progressive neural networks")) of the simpler agent's high-level. To prevent this, we take inspiration from prior work in transfer learning and introduce a KL-regularizer that allows the complex agent to improve performance while staying close to the simpler agent's high-level. Intuitively, this balances imitation of the simpler agent's high-level against the agent's own ability to solve the task.
Doing this significantly improves performance on a suite of navigation and manipulation transfer tasks, approaching the performance of training from scratch with only a third the number of samples. In summary, we present the following contributions in this work: (a) we show how hierarchical policies can help morphological transfer; although recent works in Hierarchical Reinforcement Learning (Peng et al., [2017](#bib.bib64 "Deeploco: dynamic locomotion skills using hierarchical deep reinforcement learning"), [2019](#bib.bib5 "Mcp: learning composable hierarchical control with multiplicative compositional policies")) allude to the potential of morphological transfer, to our knowledge we are the first work that concretely focuses on this problem. (b) We propose two key technical insights for hierarchical imitation, a top-down low-level alignment and a KL-regularized high-level objective, to accelerate transfer. (c) Finally, we empirically demonstrate significant improvements in the performance of morphological transfer on long-horizon navigation and manipulation tasks.

2 Background
-------------

![We transfer hierarchical policies across morphologies through decoupled imitation.](https://media.arxiv-vanity.com/render-output/6435841/x2.png)

Figure 2: We transfer hierarchical policies across morphologies through decoupled imitation. For the low-level policy we use density-based discriminative imitation, detailed in Section [3.2](#S3.SS2 "3.2 Low-level imitation through behavior alignment ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"), which improves zero-shot high-level transfer. For the high-level policy, we use KL-regularized imitation, detailed in Section [3.3](#S3.SS3 "3.3 High-level imitation through KL-regularized training ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"), which improves finetuning of the high-level.

Before we describe our framework, we first discuss relevant background on RL. For an in-depth survey, we refer the reader to (Sutton et al., [1998](#bib.bib78 "Introduction to reinforcement learning"); Kaelbling et al., [1996](#bib.bib70 "Reinforcement learning: a survey")).

### 2.1 Reinforcement learning

In our continuous-control RL setting, an agent receives a state observation $s_t \in S$ from the environment and applies an action $a_t \in A$ according to policy π. In our setting, where the policy is stochastic, we have $a_t \sim \pi(s_t)$. The environment returns a reward $r_t$ for every action. The goal of the agent is to maximize the expected cumulative discounted reward $\mathbb{E}_{s_{0:T},\, a_{0:T-1},\, r_{0:T-1}}\big[\sum_{t=0}^{T-1} \gamma^t r_t\big]$ for discount factor γ and horizon length T. On-policy RL (Schulman et al., [2015](#bib.bib75 "Trust region policy optimization."); Kakade, [2002](#bib.bib73 "A natural policy gradient"); Williams, [1992](#bib.bib74 "Simple statistical gradient-following algorithms for connectionist reinforcement learning")) optimizes π by iterating between data collection and policy updates. It hence requires new on-policy data every iteration, which is expensive to obtain. Off-policy reinforcement learning, on the other hand, retains past experiences in a replay buffer and is able to re-use past samples. Thus, in practice, off-policy algorithms have been found to achieve better sample efficiency (Lillicrap et al., [2015](#bib.bib77 "Continuous control with deep reinforcement learning")). For our experiments we use the off-policy SAC algorithm (Haarnoja et al., [2018](#bib.bib72 "Soft actor-critic algorithms and applications")) as our base RL optimizer due to its sample efficiency. However, we note that our framework is compatible with any standard RL algorithm.
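As a small worked example of the objective above, the discounted return $\sum_{t=0}^{T-1} \gamma^t r_t$ for one episode can be accumulated backwards over the reward sequence:

```python
def discounted_return(rewards, gamma=0.99):
    """Backward accumulation of sum_t gamma^t * r_t for one episode."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# e.g. discounted_return([1.0, 0.0, 2.0], gamma=0.9) == 1.0 + 0.9**2 * 2.0
```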
### 2.2 Two-layered hierarchical RL (HRL)

The central idea of HRL is to abstract the policy π into multiple policies that operate at temporally different levels. The most common abstraction is a two-level hierarchy (Nachum et al., [2018a](#bib.bib43 "Data-efficient hierarchical reinforcement learning")): a low-level policy $\pi^{\text{lo}}$ and a high-level policy $\pi^{\text{hi}}$. In our formulation, the high-level takes as input a part of the state observation, $s^{\text{hi}}_t$, and outputs morphology-independent high-level actions $a^{\text{hi}}_t$ that serve as subgoals for the low-level policy. These subgoals belong to a goal space $G$. The low-level takes as input a part of the state observation, $s^{\text{lo}}_t$, and the commanded subgoal $a^{\text{hi}}_t$, and outputs low-level control actions $a^{\text{lo}}_t$ to try to reach the subgoal. Note that we split the state observation $s_t$ into two components: $s^{\text{hi}}_t$, a global morphology-independent state, and $s^{\text{lo}}_t$, a proprioceptive morphology-dependent state. This particular choice of state, inspired by Marino et al. ([2018](#bib.bib79 "Hierarchical rl using an ensemble of proprioceptive periodic policies")), allows us to transfer $\pi^{\text{hi}}$ across morphologies that have different state representations.

3 Method
---------

In this work, we focus on the problem of transferring policies from one morphology to another for challenging long-horizon problems. Concretely, we define morphology transfer from morphology A to morphology B on a specific task as transferring a policy trained with A to B; i.e., given access to a trained $\pi_A$, what is the fastest way to train $\pi_B$? (See Figure [2](#S2.F2 "Figure 2 ‣ 2 Background ‣ Hierarchically Decoupled Imitation for Morphological Transfer").) Practically, A is a simple agent for which training $\pi_A$ might be a lot easier (and safer) than training a policy on a more complex agent B. However, since A and B have different state and action spaces afforded by their morphologies, we build on top of the hierarchical setup described in Section [2.2](#S2.SS2 "2.2 Two-layered hierarchical RL (HRL) ‣ 2 Background ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). The hierarchical demarcation between low-level policies that act on proprioceptive states and high-level policies that act on global states provides a natural way to transfer high-level knowledge. Moreover, hierarchical learning is empirically known to provide massive improvements in sample-efficiency (Nachum et al., [2018a](#bib.bib43 "Data-efficient hierarchical reinforcement learning")). In the following subsections we detail our proposed technique.

### 3.1 Zero-shot high-level transfer

In the hierarchical setting, morphology transfer reduces to transferring $\pi_A \equiv [\pi^{\text{lo}}_A, \pi^{\text{hi}}_A]$ trained on A to $\pi_B \equiv [\pi^{\text{lo}}_B, \pi^{\text{hi}}_B]$. One straightforward way to perform transfer is to set $\pi^{\text{hi}}_A \rightarrow \pi^{\text{hi}}_B$, as the input $s^{\text{hi}}$ and output $a^{\text{hi}}$ of the high-level policies are morphology-independent. The low-level policy $\pi^{\text{lo}}_B$ can either be learned with the fixed high-level $\pi^{\text{hi}}_B$ or trained independently. In our experiments, we pre-train the low-level policy $\pi^{\text{lo}}_B$ on goals sampled uniformly from $G$, allowing us to learn an effective $\pi^{\text{lo}}_B$ without access to $\pi_A$ and to generalize over tasks without re-training $\pi^{\text{lo}}_B$.
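To make the zero-shot transfer concrete, the sketch below rolls out a two-level policy in which the transferred high-level $\pi^{\text{hi}}_A$ sets a subgoal every k timesteps from the morphology-independent state, and agent B's own pre-trained low-level $\pi^{\text{lo}}_B$ pursues it from the proprioceptive state. The environment interface and policy callables are illustrative assumptions.

```python
def rollout_zero_shot_transfer(env, pi_hi_A, pi_lo_B, k=10, max_steps=500):
    """Sketch of zero-shot high-level transfer: reuse agent A's high-level
    with agent B's independently trained low-level. `env.reset`/`env.step`
    are assumed to return a dict with a morphology-independent component
    ('hi') and a proprioceptive component ('lo'); names are illustrative."""
    state = env.reset()
    total_reward, subgoal = 0.0, None
    for t in range(max_steps):
        if t % k == 0:
            # High-level acts on the global, morphology-independent state.
            subgoal = pi_hi_A(state["hi"])
        # Low-level acts on proprioceptive state plus the commanded subgoal.
        action = pi_lo_B(state["lo"], subgoal)
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```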
### 3.2 Low-level imitation through behavior alignment

Although directly transferring the high-level policy $\pi^{\text{hi}}_A \rightarrow \pi^{\text{hi}}_B$ with an independently trained low-level $\pi^{\text{lo}}_B$ allows for zero-shot morphology transfer, it suffers from low-level domain shift, especially for tasks requiring precise control. This is because different morphologies afford different low-level behavior. Intuitively, if $\pi^{\text{lo}}_B$ doesn't generate behavior similar to $\pi^{\text{lo}}_A$'s, transferring $\pi^{\text{hi}}_A \rightarrow \pi^{\text{hi}}_B$ may not work, since $\pi^{\text{lo}}_B$ does not generate the behavior expected by $\pi^{\text{hi}}_B$. To solve this problem, we align $\pi^{\text{lo}}_B$ to $\pi^{\text{lo}}_A$ on the set of goals $G$. Note that direct cloning is not possible, since the low-level state and action spaces are not the same. Additionally, simply ensuring that both agents can reach the same portions of $G$ used by $\pi^{\text{hi}}_A$ is insufficient for strong alignment, since goal-reaching behavior can differ while achieving the same goal (illustrated as low-level behavior in Figure [2](#S2.F2 "Figure 2 ‣ 2 Background ‣ Hierarchically Decoupled Imitation for Morphological Transfer")). To incentivize $\pi^{\text{lo}}_B$ (parameterized by $\theta^{\text{lo}}_B$) to mimic the behavior of $\pi^{\text{lo}}_A$, we propose minimizing the following mutual information objective:

$$\min_{\theta^{\text{lo}}_B} I(\text{morphology};\ \text{behavior}) \qquad (1)$$

Minimizing the mutual information between the morphology type ($M = \{A, B\}$) and the generated low-level behavior will result in a low-level policy $\pi^{\text{lo}}_B$ whose behavior cannot be distinguished from $\pi^{\text{lo}}_A$'s. To ensure that the behavior is compatible with both morphologies A and B, we define the behavior at time t as $\tau_t = s^{\text{hi}}_{t:t+k}$, where k is the horizon of the behavior. Let $\mathcal{T}_t$ denote the distribution over behaviors $\tau_t$. Since $I(M; \mathcal{T}_t) = H(M) - H(M \mid \mathcal{T}_t)$ and $H(M)$ cannot be controlled through $\pi^{\text{lo}}_B$, our objective from equation [1](#S3.E1 "(1) ‣ 3.2 Low-level imitation through behavior alignment ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer") reduces to maximizing $H(M \mid \mathcal{T}_t)$, where $H(\cdot)$ denotes entropy. Since the probability of agent selection $p(M = m \mid \mathcal{T}_t = \tau_t)$ cannot be readily estimated, we compute a variational lower bound, which reduces our objective to:

$$\max_{\theta^{\text{lo}}_B} \mathbb{E}_{m \sim M,\ \tau_t \sim \mathcal{T}_t}\left[-\log q_\phi(m \mid \tau_t)\right] \qquad (2)$$

Here $q$, parameterized by $\phi$, is effectively a binary classifier (or discriminator) that outputs the probability of the generated behavior coming from morphology m given input behavior $\tau_t$. Complete derivations of equation [2](#S3.E2 "(2) ‣ 3.2 Low-level imitation through behavior alignment ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer") can be found in the Appendix. Maximizing this objective through $\theta^{\text{lo}}_B$ implies generating behaviors that maximally confuse the discriminator. To do this, we augment the low-level policy rewards as follows:

$$r^{\text{lo}}_B \leftarrow R(\tau^B_t \mid g) - \lambda^{\lambda_1}_0 \log q_\phi(M = B \mid \tau^B_t) \qquad (3)$$

Here $\lambda_0$ and $\lambda_1$ are temperature parameters that anneal the rewards from the discriminator over time, $\tau^B_t$ represents the trajectory generated by $\pi^{\text{lo}}_B$ while trying to reach the subgoal g, and R represents the sub-goal reward function. The training process is summarized in Algorithm [1](#alg1 "Algorithm 1 ‣ 3.2 Low-level imitation through behavior alignment ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). This procedure of fitting a discriminator and then re-optimizing the policy has a similar flavor to recent work in approximate inverse reinforcement learning (Ho and Ermon, [2016](#bib.bib61 "Generative adversarial imitation learning")), where the discriminator represents the reward density function.

Algorithm 1: Low-level alignment for high-level transfer

- **Input:** new agent B, agent A's behavior $\mathcal{T}_A$, and a goal distribution $G$
- **Initialize:** learnable parameters $\theta$ for $\pi^{\text{lo}}_{B,\theta}$ and $\phi$ for $q_\phi$
- **for** $i = 1, 2, \ldots, N_{\text{iter}}$:
  - sample goal $g \sim G$
  - **for** $j = 1, 2, \ldots, T$: sample action $a^{\text{lo}}_t \sim \pi^{\text{lo}}_{B,\theta}$ and collect experience $(s^{\text{lo}}_t, a^{\text{lo}}_t, s^{\text{lo}}_{t+1}, r^{\text{lo}}_t)$ for $\mathcal{T}_B$
  - **for** $j = 1, 2, \ldots, M_{\text{policy}}$: update $r^{\text{lo}}_t$ according to equation (3), then $\theta \leftarrow \text{policyOptimizer}(\{(s^{\text{lo}}_t, a^{\text{lo}}_t, s^{\text{lo}}_{t+1}, r^{\text{lo}}_t)\}, \theta)$
  - **for** $j = 1, 2, \ldots, M_{\text{discrim}}$: $\phi \leftarrow \text{discrimOptimizer}(\mathcal{T}_A, \mathcal{T}_B, \phi)$
- **Return:** $\theta$
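The reward augmentation in equation (3) is a one-liner once a discriminator is available. The sketch below gives a minimal version; the annealing schedule and the numerical floor on the log are illustrative assumptions.

```python
import numpy as np

def aligned_low_level_reward(goal_reward, disc_prob_B, lam=1.0, anneal=1.0):
    """Sketch of the augmented low-level reward in eq. (3): the goal-reaching
    reward minus an annealed discriminator term. `disc_prob_B` is
    q_phi(M=B | tau), the discriminator's probability that the behavior
    segment came from agent B; `lam` and `anneal` stand in for the
    temperature schedule and are illustrative."""
    eps = 1e-8  # numerical floor so log() stays finite
    # Maximizing this reward pushes q_phi(M=B | tau) toward 0, i.e. toward
    # behavior the discriminator mistakes for agent A's.
    return goal_reward - anneal * lam * np.log(disc_prob_B + eps)
```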
### 3.3 High-level imitation through KL-regularized training

In the previous section we discussed aligning the low-levels through density-based imitation, which allows for better zero-shot transfer of the high-level $\pi^{\text{hi}}_A \rightarrow \pi^{\text{hi}}_B$. However, even with low-level alignment, if the morphologies afford different abilities, direct transfer of the high-level may not reach optimal behavior. One way to remedy this is to finetune the high-level $\pi^{\text{hi}}_B$ after transferring from $\pi^{\text{hi}}_A$, which already encodes knowledge required to solve the task. However, direct finetuning with RL often leads to catastrophic forgetting of the former policy $\pi^{\text{hi}}_A$, as previously noted in Rajeswaran et al. ([2017](#bib.bib58 "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations")) and Rusu et al. ([2016](#bib.bib24 "Progressive neural networks")). To alleviate this, we propose a KL-regularized finetuning that balances staying close to $\pi^{\text{hi}}_A$ and exploiting task-driven reward signals:

$$\text{grad}^{\text{hi}}_B \leftarrow \text{grad}_{\text{RL}} + \alpha \nabla_{\theta^{\text{hi}}_B} \mathrm{KL}\!\left(\pi^{\text{hi}}_B(a^{\text{hi}}_B \mid s^{\text{hi}}_B)\,\big\|\,\pi^{\text{hi}}_A(a^{\text{hi}}_A \mid s^{\text{hi}}_B)\right) \qquad (4)$$

Here, under the high-level trajectory $\tau^{\text{hi}} \equiv (s^{\text{hi}}, a^{\text{hi}})$ generated by $\pi^{\text{hi}}_B$, the gradient for the parameters $\theta^{\text{hi}}_B$ of $\pi^{\text{hi}}_B$ is denoted $\text{grad}^{\text{hi}}_B$. It has two terms: the first, $\text{grad}_{\text{RL}}$, represents the gradient through the base RL optimizer; the second represents the behavior-cloning gradient of imitating $\pi^{\text{hi}}_A$, with $a^{\text{hi}}_A \sim \pi^{\text{hi}}_A$. This additional imitation objective is a practical form of regularization that penalizes deviations from sub-goal sequences set by the existing high-level $\pi^{\text{hi}}_A$. Additionally, the imitation loss forces the policy to remain near the portion of the state space in which $\pi^{\text{hi}}_A$ was maximally performant, preventing the aforementioned phenomenon of catastrophic forgetting.

![Visualization of all agents and tasks used in our environment suite.](https://media.arxiv-vanity.com/render-output/6435841/x3.png)

Figure 3: Visualization of all agents and tasks used in our environment suite.
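Equation (4)'s regularizer is cheap to compute for diagonal-Gaussian policies, since the KL divergence has a closed form. The sketch below combines it with a generic RL loss; the tensor layout, the `detach` of the frozen prior $\pi^{\text{hi}}_A$, and the fixed coefficient are illustrative assumptions.

```python
import torch

def kl_diag_gaussians(mu_b, log_std_b, mu_a, log_std_a):
    """Closed-form KL(pi_B || pi_A) for diagonal-Gaussian policies,
    summed over action dimensions."""
    var_b, var_a = (2 * log_std_b).exp(), (2 * log_std_a).exp()
    return (log_std_a - log_std_b
            + (var_b + (mu_b - mu_a) ** 2) / (2 * var_a) - 0.5).sum(-1)

def regularized_policy_loss(rl_loss, pi_b_dist, pi_a_dist, alpha):
    """Sketch of the KL-regularized high-level objective in eq. (4): the
    base RL loss plus a penalty for deviating from the simpler agent's
    high-level. `pi_*_dist` are (mu, log_std) tuples evaluated on the
    same batch of high-level states; illustrative only."""
    mu_b, log_std_b = pi_b_dist
    mu_a, log_std_a = pi_a_dist
    # The transferred prior pi_hi_A is frozen, so its outputs are detached.
    kl = kl_diag_gaussians(mu_b, log_std_b, mu_a.detach(), log_std_a.detach())
    return rl_loss + alpha * kl.mean()
```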
4 Experiments
--------------

In this section we discuss empirical results on using hierarchically decoupled imitation for transferring policies across morphologies. Note that since there are no standard benchmarks for evaluating morphological transfer, we create an environment suite that will be publicly released. Using this experimental setup, we seek to answer the following questions. First, does hierarchical decoupling provide an effective natural framework for morphology transfer? Second, does discriminative imitation improve zero-shot performance of morphology transfer? Finally, does KL-regularized finetuning improve the performance of transferred policies over baseline methods?

### 4.1 Experimental setup

To study morphological transfer for long-horizon tasks, we present a suite of environments with eight agents and four tasks for manipulation and navigation, as illustrated in Figure [3](#S3.F3 "Figure 3 ‣ 3.3 High-level imitation through KL-regularized training ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). For navigation, we use four agents: PointMass, Ant (Duan et al., [2016](#bib.bib84 "Benchmarking deep reinforcement learning for continuous control")), 3-Legged Ant, and Quadruped (Tassa et al., [2018](#bib.bib83 "Deepmind control suite")); and two tasks: Waypoint navigation and Maze navigation. However, unlike in Nachum et al. ([2018a](#bib.bib43 "Data-efficient hierarchical reinforcement learning")), our maze navigation task has a completely sparse reward, making the task much more difficult. All morphologies have vastly different action spaces: the simplest agent, the PointMass, has a two-dimensional action space, while the most complex agent, the Quadruped, has a twelve-dimensional one. Across navigation tasks, the goal space is the (x, y) position of the agent's torso. For the manipulation environments, we use four agents with varying degrees of freedom: PointMass, 2Link-Arm, 3Link-Arm, and 4Link-Arm, adapted from the OpenAI Gym Reacher (Brockman et al., [2016](#bib.bib85 "Openai gym")); and two tasks: BlockPush (with variants based on goal location) and PegInsert. The goal space $G$ for the manipulation tasks is the (x, y) position of the end-effector. All of these environments are simulated in MuJoCo (Todorov et al., [2012](#bib.bib86 "Mujoco: a physics engine for model-based control")) using the OpenAI Gym interface. Images of the environments can be found in Figure [3](#S3.F3 "Figure 3 ‣ 3.3 High-level imitation through KL-regularized training ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). More details of the environments and the agents are provided in the Appendix.

### 4.2 Training details

We use SAC as our base RL optimizer and re-purpose the open-source Stable Baselines (Hill et al., [2018](#bib.bib82 "Stable baselines")) code-base for our methods. We first train low-level policies by sampling goals uniformly from the goal space $G$ of each task. In order to make our hierarchical framework applicable to any task or environment, we use highly generalizable reward functions for the low-level policies (sketched below). For the navigation tasks, the low-level reward is the weighted cosine between the agent's movement vector and the vector to the goal g. In all manipulation tasks, we use the L2 distance between the end-effector and the goal g, with a sparse bonus for being within $\epsilon$ of g. Across all tasks of the same type, we train low-level policies for each agent only once; i.e., the same low-level policies used for waypoint navigation are used for the maze task. We ran all experiments across five random seeds, except for the high-level and high-level finetunings of the selected navigation tasks, where we ran ten random seeds. When training policies for low-level imitation, we collect data from the existing agent A in an off-policy manner: for every step agent B takes, we add the transition $(s^{\text{hi}}_t, s^{\text{hi}}_{t+1})$ from agent B, along with a transition for agent A sampled from $\mathcal{T}_A$, to a circular buffer, maintaining class balance. During training, we only update the discriminator periodically, by randomly sampling data from the buffer. In addition to annealing the weight of the discriminator rewards as per equation [3](#S3.E3 "(3) ‣ 3.2 Low-level imitation through behavior alignment ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"), we also anneal the learning rate of the discriminator to zero, preventing over-fitting and allowing agent B to train against an increasingly stationary target. For KL-regularized finetuning, we directly incorporate a term for the KL-divergence between $\pi^{\text{hi}}_A$ and $\pi^{\text{hi}}_B$ into the policy-optimizer loss; this is easily calculated in closed form, as all of our policies are parameterized by diagonal Gaussians. Just as with the discriminator, the KL-loss coefficient is annealed to zero during training, as it is no longer needed once the transferred policy is stable. Additional training details, including hyper-parameter settings, are included in the Appendix.
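The two generic low-level rewards described above are simple to state in code. The sketch below is an illustration of their structure; the exact weighting, $\epsilon$, and bonus values used in our experiments are not reproduced here.

```python
import numpy as np

def navigation_low_level_reward(prev_pos, pos, goal):
    """Sketch of the navigation low-level reward: the cosine between the
    agent's movement vector and the vector toward the goal."""
    move = pos - prev_pos
    to_goal = goal - prev_pos
    denom = np.linalg.norm(move) * np.linalg.norm(to_goal) + 1e-8
    return float(np.dot(move, to_goal) / denom)

def manipulation_low_level_reward(ee_pos, goal, eps=0.05, bonus=1.0):
    """Sketch of the manipulation low-level reward: negative L2 distance
    between end-effector and goal, plus a sparse bonus within eps.
    `eps` and `bonus` are illustrative assumptions."""
    dist = float(np.linalg.norm(ee_pos - goal))
    return -dist + (bonus if dist < eps else 0.0)
```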
### 4.3 How well does zero-shot hierarchical transfer work?

| Waypt | PM HL | Ant HL | Ant3 HL | Block Push | PM HL | 3Link HL | 4Link HL | Insert | PM HL | 3Link HL | 4Link HL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PM LL | 1e3±43 | 603±34 | 716±58 | PM LL | .99±.01 | .33±.18 | .26±.17 | PM LL | 1±0 | .6±.22 | .6±.22 |
| Ant LL | 483±39 | 476±19 | 473±51 | 3Link LL | .17±.08 | .96±.02 | .49±.14 | 3Link LL | 1±0 | 1.±0.0 | .8±.18 |
| Ant3 LL | 489±74 | 484±72 | 499±65 | 4Link LL | .24±.11 | .1±.05 | .89±.04 | 4Link LL | 1±0 | .8±.18 | .9±.06 |

Table 1: Selected zero-shot performance results, averaged across a hundred episodes per run. For all tasks except waypoint navigation, the values reported indicate the fraction of successes. Rows denote the agent providing the low-level policy; columns denote the transferred high-level.

We perform the straightforward high-level transfer described in Section [3.1](#S3.SS1 "3.1 Zero-shot high-level transfer ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer") across our task-morphology environment suite. In Table [1](#S4.T1 "Table 1 ‣ 4.3 How well does zero-shot hierarchical transfer work? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer"), we present a snapshot of the results of combining the high-level of one agent (column-wise) with a specific agent (row-wise). On relatively easy tasks like Waypoint, zero-shot transfer works well: for instance, on the Ant morphology, using a high-level learned on the PointMass does not result in any performance degradation. Interestingly, the nearly ideal PointMass experiences a significant performance degradation when using a different high-level policy, indicating that high-level policies are indeed overfit to the morphology they are trained on. This is manifested further in the block-push task, where zero-shot performance deteriorates significantly: for example, when transferring the high-level from the PointMass to the 4Link Arm, performance drops by around 74%. The poor high-level transfer on harder environments and morphologies motivates the need for better transfer algorithms. Detailed experiments with all morphology transfers are presented in the Appendix due to space constraints.

### 4.4 Does discriminative imitation improve transfer?
| Task | Waypoint Nav | Maze | Block Push 1 | | | Block Push 2 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Transfer | PM→Ant | PM→Ant | PM→3Link | 2Link→4Link | 3Link→4Link | PM→3Link | 2Link→4Link | 3Link→4Link |
| Zero-Shot | 482.72±38.96 | 0.3±.08 | 0.17±.08 | 0.07±.04 | 0.1±.05 | 0.24±.12 | 0.19±.12 | 0.39±.16 |
| Discrim (ours) | 546.06±14.76 | 0.55±.12 | 0.34±.10 | 0.17±.08 | 0.23±.13 | 0.43±.12 | 0.46±.30 | 0.44±.14 |

Table 2: Selected discriminator results. We train low-level policies with a discriminator for the transferred-to agents, then assess zero-shot performance across a hundred episodes for each run. We find that the same aligned low-level policy improves transfer across both tasks, commonly leading to a doubling of performance.

To improve zero-shot high-level transfer, we perform discriminative imitation on the low-level policies as described in Section [3.2](#S3.SS2 "3.2 Low-level imitation through behavior alignment ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). Results are presented in Table [2](#S4.T2 "Table 2 ‣ 4.4 Does discriminative imitation improve transfer? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer") across a wide set of transfers for which plain high-level transfer performs poorly. We measure the effectiveness of the discriminator by comparing, for a given high-level, the zero-shot performance of the plain low-level against that of the low-level trained with a discriminator. Across nearly all tasks, discriminative imitation of the low-level improves zero-shot performance, especially in manipulation tasks: for example, transferring the PointMass high-level to the 3Link Arm is twice as successful across both block-push tasks using our method. A substantial benefit of discriminative imitation is that the aligned low-level policy only needs to be trained once for transfer across any number of high-level policies, making it particularly useful when training the high-level policy is expensive, or when an increase in transfer performance is desired across a large set of tasks.

Figure 4: Learning curves for low-level policies with and without the discriminator. For the Ant agent (left), the discriminator policy achieves slightly lower though comparable performance to the regular low-level, while the 3- and 4-Link arms (right) learn slightly faster with discriminator supervision.

To verify that our agents are indeed learning to discriminate, rather than just learning objectively better policies, we examine the low-level learning curves in Figure [4](#S4.F4 "Figure 4 ‣ 4.4 Does discriminative imitation improve transfer? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer").
Though for the arm agents the discriminator provides an extra form of supervision that speeds up learning, the final rewards for all agents with the discriminator are at or below those without, indicating that the performance benefits in Table [2](#S4.T2 "Table 2 ‣ 4.4 Does discriminative imitation improve transfer? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer") can indeed be attributed to imitation.

### 4.5 Does finetuning improve transfer?

After transferring the high-level policy from a simpler agent to a more complex one, we finetune it by retraining to improve performance. Results are visualized in Figure [6](#S4.F6 "Figure 6 ‣ 4.6 How much does KL-regularized training help? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer") and Figure [7](#S4.F7 "Figure 7 ‣ 4.6 How much does KL-regularized training help? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). Finetuning clearly works better than learning the high-level from scratch (in green) and reaches substantially higher performance than zero-shot high-level transfer (dotted purple line). However, in some cases regular finetuning suffers from unstable training: for instance, in the transfer finetuning of the 2Link Arm from the PointMass on BlockPush, we see poor confidence bounds. Empirically, we notice massive fluctuations in training performance across seeds, most likely explained by high-level policies shifting far out of distribution during morphology transfer, causing catastrophic forgetting of task-solving knowledge. This motivates the need for better transfer learning methods for finetuning.

### 4.6 How much does KL-regularized training help?

To improve performance during finetuning, we use KL-regularized imitation as described in Section [3.3](#S3.SS3 "3.3 High-level imitation through KL-regularized training ‣ 3 Method ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). Empirically, we find that the addition of an imitation loss during high-level transfer substantially improves performance, as seen in Figure [6](#S4.F6 "Figure 6 ‣ 4.6 How much does KL-regularized training help? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer") and Figure [7](#S4.F7 "Figure 7 ‣ 4.6 How much does KL-regularized training help? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). On BlockPush we notice at least a 2X speedup over the already strong direct finetuning method. On the Maze environments, we again notice improvements in performance, though the gains are modest compared to BlockPush. In all cases, the variance across runs is no worse than regular finetuning, and in select cases it drastically improves, as seen in the 2Link and 3Link experiments in Figure [7](#S4.F7 "Figure 7 ‣ 4.6 How much does KL-regularized training help? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). We additionally compare our KL-regularized finetuning method to training the high-level policy from scratch with a pre-trained low-level, and to learning the task without hierarchy, denoted "Full". Note that we make our comparisons with respect to the number of samples used in training. Learning without hierarchy was unsuccessful for the complex agents on navigation tasks, particularly the sparse-reward maze.
Though it is cut off in the graphs, learning without hierarchy was successful for all the manipulation tasks.

![Trajectories for the Ant agent on a "steps" environment.](https://media.arxiv-vanity.com/render-output/6435841/result_figs/steps.png)

Figure 5: Trajectories for the Ant agent on a "steps" environment for training the high-level from scratch, finetuning from PointMass, and zero-shot transfer from PointMass, from left to right respectively. The dark blue (taller) and light blue (shorter) steps are reduced in height to allow the Ant agent to climb over them.

![Comparison of finetuning performance from PointMass on navigation tasks.](https://media.arxiv-vanity.com/render-output/6435841/x4.png)

Figure 6: Comparison of performance finetuning from PointMass for Ant Waypoint, 3-Legged Ant, Ant End-Maze, and Sampled Ant-Maze, from left to right. During regular training of the maze task we sample goals uniformly from the maze segments; however, in the second plot from the right we also assess the performance of transfer when only attempting to reach the end of the maze. Rewards below 200 are clipped. For Waypoint, the agent receives a reward of 100 per waypoint reached; for Maze, the agent receives a reward of 1000 for reaching the set goal. "Full" denotes that the policy was trained without hierarchy.

![Visualization of finetuning performance on the Block Push 1 task.](https://media.arxiv-vanity.com/render-output/6435841/x5.png)

Figure 7: Visualization of finetuning performance for the 2Link, 3Link, and 4Link arms, from left to right, on the Block Push 1 task. Rewards below -10 are clipped. The agent receives a reward of 200 for completing the task successfully.

### 4.7 In what cases does hierarchical transfer fail?

The aforementioned results demonstrate the significant promise of hierarchically decoupled transfer in scenarios where all agents can cover the task state space similarly. However, what happens if a more complex agent's morphology endows it with additional abilities? In this context, hierarchical decoupling may not lead to a perfectly optimal transfer, as the more complex agent may be able to reach new states that other agents could not. To examine such scenarios, we design a simple "step maze" task with low barriers that an Ant can scale but a PointMass cannot. We then examine the trajectories of the Ant when its high-level is trained from scratch, finetuned from PointMass, and zero-shot transferred from PointMass, in Figure [5](#S4.F5 "Figure 5 ‣ 4.6 How much does KL-regularized training help? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). The Ant high-level trained from scratch learns to climb over the highest barrier, while the zero-shot trajectory mostly avoids the barriers, just as the PointMass would. With finetuning, the Ant agent moves closer to the policy learned from scratch, climbing the lower barrier, but it remains somewhat tied to its prior and less consistently climbs the high barrier.

5 Related Work
---------------

Our work is inspired by and builds on top of a broad range of topics, including transfer learning, morphological transfer, imitation learning, and information-theoretic RL. In this section, we overview the most relevant ones.

### 5.1 Multi-task and transfer learning

Learning models that can share information across tasks has been concretely studied in the context of multi-task learning (Caruana, [1997](#bib.bib1 "Multitask learning")), where models for multiple tasks are learned simultaneously.
More recently, Kokkinos ([2017](#bib.bib2 "Ubernet: training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory")) and Doersch and Zisserman ([2017](#bib.bib7 "Multi-task self-supervised visual learning")) study shared learning across visual tasks, Pinto and Gupta ([2017](#bib.bib9 "Learning to push by grasping: using multiple tasks for effective learning")) study shared learning across robotic tasks, and Pathak et al. ([2019](#bib.bib28 "Learning to control self-assembling morphologies: a study of generalization via modularity")) study message passing for low-level control of articulated agents. In contrast to multi-task learning, where knowledge is learned simultaneously, we focus on the disparate setting in which knowledge from one task (a simpler agent) is transferred to another (a more complex agent). Most relevant to our work, Chen et al. ([2018](#bib.bib31 "Hardware conditioned policies for multi-robot transfer learning")) focus on transfer across different hardware by training hardware-conditioned embeddings across a large number of morphologies. However, access to more than two or three morphologies is often unrealistic in practice, and our method is best suited for such scenarios.

Transfer learning (Pan and Yang, [2009](#bib.bib10 "A survey on transfer learning"); Torrey and Shavlik, [2010](#bib.bib11 "Transfer learning")) focuses on transferring knowledge from one domain to another. One of the simplest forms of transfer is finetuning (Girshick et al., [2014](#bib.bib20 "Rich feature hierarchies for accurate object detection and semantic segmentation")), where models are initialized from weights trained on a different task. Several other works look at more complex forms of transfer (Yang et al., [2007](#bib.bib13 "Adapting svm classifiers to data with shifted distributions"); Hoffman et al., [2014](#bib.bib12 "Continuous manifold based adaptation for evolving visual domains"); Aytar and Zisserman, [2011](#bib.bib14 "Tabula rasa: model transfer for object category detection"); Saenko et al., [2010](#bib.bib15 "Adapting visual category models to new domains"); Kulis et al., [2011](#bib.bib16 "What you saw is not what you get: domain adaptation using asymmetric kernel transforms"); Fernando et al., [2013](#bib.bib17 "Unsupervised visual domain adaptation using subspace alignment"); Gopalan et al., [2011](#bib.bib18 "Domain adaptation for object recognition: an unsupervised approach"); Jhuo et al., [2012](#bib.bib19 "Robust visual domain adaptation with low-rank reconstruction")). Most relevant to our work is Tzeng et al. ([2017](#bib.bib21 "Adversarial discriminative domain adaptation")), where a discriminator is used to align intermediate features across domains. Similar in spirit, in our proposed low-level alignment through density imitation, we use a discriminator to align the behavior of low-level policies across morphologies. In the context of RL, transfer learning research (Taylor and Stone, [2009](#bib.bib22 "Transfer learning for reinforcement learning domains: a survey")) has focused on learning transferable features across tasks (Parisotto et al., [2015](#bib.bib23 "Actor-mimic: deep multitask and transfer reinforcement learning"); Barreto et al., [2017](#bib.bib25 "Successor features for transfer in reinforcement learning"); Omidshafiei et al., [2017](#bib.bib26 "Deep decentralized multi-task multi-agent reinforcement learning under partial observability")).
Another line of work (Rusu et al., [2016](#bib.bib24 "Progressive neural networks"); Kansky et al., [2017](#bib.bib30 "Schema networks: zero-shot transfer with a generative causal model of intuitive physics"); Devin et al., [2017](#bib.bib27 "Learning modular neural network policies for multi-task and multi-robot transfer")) has focused on network architectures that improve the transfer of RL policies. Since the focus of our work is on transfer across morphologies, the aforementioned works are orthogonal and complementary to ours.

### 5.2 Hierarchical Reinforcement Learning

HRL-based techniques (Barto and Mahadevan, [2003](#bib.bib35 "Recent advances in hierarchical reinforcement learning"); Bacon et al., [2017](#bib.bib36 "The option-critic architecture"); Li et al., [2019a](#bib.bib87 "Sub-policy adaptation for hierarchical reinforcement learning")) have been able to solve complex, long-horizon tasks through temporal abstraction across hierarchies, as seen in Levy et al. ([2017](#bib.bib48 "Learning multi-level hierarchies with hindsight")) and Nachum et al. ([2018a](#bib.bib43 "Data-efficient hierarchical reinforcement learning")). In a similar vein, several works have focused on the discovery of primitives (Eysenbach et al., [2018](#bib.bib39 "Diversity is all you need: learning skills without a reward function"); Sharma et al., [2019](#bib.bib38 "Dynamics-aware unsupervised discovery of skills"); Shankar et al., [2020](#bib.bib33 "Discovering motor programs by recomposing demonstrations")), which are useful for hierarchical RL. These ideas have already been combined, as in Stochastic Neural Networks by Florensa et al. ([2017](#bib.bib45 "Stochastic neural networks for hierarchical reinforcement learning")), where skills are learned in pretraining and then used to solve diverse, complex tasks. Similarly, Andreas et al. ([2016](#bib.bib46 "Modular multitask reinforcement learning with policy sketches")) learn modular sub-policies to solve a temporally extended task. These prior works inform our choice to use HRL as the backbone of our framework. Moreover, hierarchical decoupling allows for a natural delineation of control (Wolpert and Kawato, [1998](#bib.bib34 "Multiple paired forward and inverse models for motor control")). However, unlike standard hierarchical RL techniques, we train our low-level policies independently of the high-level policies. Although a few recent works have alluded to the potential of hierarchical policies for morphology transfer (Peng et al., [2017](#bib.bib64 "Deeploco: dynamic locomotion skills using hierarchical deep reinforcement learning"); Tirumala et al., [2019](#bib.bib65 "Exploiting hierarchy for learning and transfer in kl-regularized rl"); Li et al., [2019b](#bib.bib66 "Hierarchical reinforcement learning with advantage-based auxiliary rewards"); Hu and Montana, [2019](#bib.bib67 "Skill transfer in deep reinforcement learning under morphological heterogeneity")), to our knowledge we are the first to focus on this problem.

### 5.3 Imitation learning

The central contribution of this work is decoupled imitation learning for the low-level and high-level policies in a hierarchical setup.
In the context of control, the field of learning from demonstrations (LfD) (Nicolescu and Mataric, [2003](#bib.bib49 "Natural methods for robot task learning: instructive demonstrations, generalization and practice"); Argall et al., [2009](#bib.bib56 "A survey of robot learning from demonstration")) learns to reproduce demonstrated expert behavior. A popular technique called behavior cloning (Esmaili et al., [1995](#bib.bib50 "Behavioural cloning in control of a dynamic system")) focuses on fitting parametric models to expert demonstrations (Kober and Peters, [2009](#bib.bib51 "Learning motor primitives for robotics"); Peters et al., [2013](#bib.bib52 "Towards robot skill learning: from simple skills to table tennis")). Several works (Niekum et al., [2012](#bib.bib53 "Learning and generalization of complex tasks from unstructured demonstrations"); Krishnan et al., [2018](#bib.bib54 "Transition state clustering: unsupervised surgical trajectory segmentation for robot learning"); Murali et al., [2016](#bib.bib55 "Tsc-dl: unsupervised trajectory segmentation of multi-modal surgical demonstrations with deep learning"); Meier et al., [2011](#bib.bib68 "Movement segmentation using a primitive library")) focus on segmenting demonstrations and then fitting models to each of the segments. More recently, Rajeswaran et al. ([2017](#bib.bib58 "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations")) regularize the behavior cloning objective for dexterous hand manipulation. Inspired by this idea, we use a KL-regularized objective to regularize the high-level policy.

Instead of imitation through cloning, inverse RL (Ng et al., [2000](#bib.bib60 "Algorithms for inverse reinforcement learning."); Abbeel and Ng, [2004](#bib.bib59 "Apprenticeship learning via inverse reinforcement learning")) focuses on recovering the underlying reward function from expert demonstrations. Ho and Ermon ([2016](#bib.bib61 "Generative adversarial imitation learning")) extend inverse RL to higher-dimensional state-action demonstrations by learning a parametric expert density model through discriminative learning between expert demonstrations and learned behavior. Following this technique, several works have extended the idea to third-person demonstrations (Stadie et al., [2017](#bib.bib62 "Third-person imitation learning")) and stochastic demonstrations (Li et al., [2017](#bib.bib63 "Infogail: interpretable imitation learning from visual demonstrations")). Solving our information-theoretic formulation for low-level state alignment reduces to a similar discriminative learning approach. However, instead of differentiating between expert demonstrations and learned behavior, our discriminator differentiates between the simpler agent’s low-level behavior and the more complex agent’s low-level behavior.

6 Conclusions
--------------

In this work, we have presented one of the first steps towards morphology transfer using hierarchically decoupled imitation. This technique allows transferring complex, long-horizon behavior from morphologically simple agents to more complex ones at a fraction of the sample complexity of standard RL techniques. Although this work focuses on simulated environments, we believe that it opens the door to research on morphological transfer for real robots.
Moreover, although our technique for decoupled imitation is presented in the context of morphological transfer, we believe that the technique is flexible enough to be applied to general-purpose imitation learning.

### Acknowledgements

We thank AWS for computing resources. We also gratefully acknowledge support from Berkeley DeepDrive, NSF, and the ONR PECASE award.

Appendix
--------

### A. Additional Derivations

Below is the full derivation of the objective used to motivate low-level discriminative imitation, taking inspiration from other work based on information-theoretic objectives (Eysenbach et al., [2018](#bib.bib39 "Diversity is all you need: learning skills without a reward function")). We start by minimizing the mutual information between the morphology $M$ and the behavior $\tau_t$, where $I$ denotes mutual information and $H$ denotes entropy:

$$
\begin{aligned}
\min_{\theta^{lo}_B} I(M;\tau_t) &= \max_{\theta^{lo}_B} -I(M;\tau_t) \\
&= \max_{\theta^{lo}_B} -\big(H(M) - H(M \mid \tau_t)\big) \\
&= \max_{\theta^{lo}_B} H(M \mid \tau_t) - H(M) \\
&= \max_{\theta^{lo}_B} H(M \mid \tau_t) \\
&= \max_{\theta^{lo}_B} \mathbb{E}\left[-\log p(M \mid \tau_t)\right] \\
&\geq \max_{\theta^{lo}_B} \mathbb{E}\left[-\log q_\phi(M \mid \tau_t)\right]
\end{aligned}
$$

As per the above derivation, we can encourage similar behavior across agents by maximizing the entropy of the morphology given a behavior. In the fourth step, we assume the distribution over morphologies is uniform, so the $H(M)$ term is a constant that can be omitted from the optimization. The final step applies the variational lower bound (Barber and Agakov, [2004](#bib.bib88 "Information maximization in noisy channels: a variational approach")). A minimal code sketch of the resulting discriminative objective is given below.
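In practice, the final bound is optimized by training a classifier $q_\phi$ to predict which morphology generated a given behavior, and rewarding agent B when the classifier assigns its behavior to agent A. The sketch below is one plausible rendering, assuming a binary morphology label and an MLP discriminator; the class names and reward weight are illustrative assumptions (the hidden sizes and weight loosely follow Table 5), not the paper's exact code.

```python
import torch
import torch.nn as nn

class MorphologyDiscriminator(nn.Module):
    """q_phi(M | tau_t): predicts which morphology produced a behavior."""
    def __init__(self, behavior_dim, hidden=42):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(behavior_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "behavior came from agent B"
        )

    def forward(self, tau):
        return self.net(tau)

def discriminator_imitation_reward(disc, tau_B, weight=0.4):
    # Agent B is rewarded when the discriminator cannot attribute its
    # behavior to morphology B, i.e. -log q_phi(M = B | tau).
    with torch.no_grad():
        log_q_B = nn.functional.logsigmoid(disc(tau_B))
    return -weight * log_q_B.squeeze(-1)

def discriminator_loss(disc, tau_A, tau_B):
    # Standard binary cross-entropy: label agent A's behavior 0, B's 1.
    bce = nn.BCEWithLogitsLoss()
    logits = torch.cat([disc(tau_A), disc(tau_B)], dim=0)
    labels = torch.cat([torch.zeros(len(tau_A), 1),
                        torch.ones(len(tau_B), 1)], dim=0)
    return bce(logits, labels)
```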
### B. Environment Details

Below are more complete specifications of the environments used in the experiments.

#### Navigation

For all navigation agents, the low-level reward is given by the weighted distance traveled towards the goal, minus an action penalty term (a code sketch of this and the manipulation reward below follows the task lists):

$$
r^{lo} = \frac{(s^{hi}_t - s^{hi}_{t-1}) \cdot (g_t - s^{hi}_{t-1})}{\lVert g_t - s^{hi}_{t-1} \rVert_2} - \lambda \lVert a_t \rVert_2^2
$$

High-level actions are taken once every 32 steps, except on the quadruped agent, where they are taken every 64. The high-level goal space is defined to be the desired change in the x, y position of the agent’s center, limited to a distance of four meters in either direction.

Agents: All agents observe joint positions (qpos), velocities (qvel), and the vector to the next sub-goal. All agents besides the point mass additionally observe contact forces. All agents use torque control.

* Point Mass: A point mass agent whose actions are forces in the cardinal directions.
* Gym Ant: This is the OpenAI Gym Ant agent with its gear reduced from 150 to 125. Note that this is less modification than the Ant agent in HiRO (Nachum et al., [2018a](#bib.bib43 "Data-efficient hierarchical reinforcement learning")).
* 3 Leg Ant: This agent is identical to the regular Ant, except one of its legs is frozen in place.
* 2 Leg Ant: Again identical to the Ant, except two diagonally opposed legs are frozen in place.
* DM Control Quadruped: The quadruped agent is similar to the Gym Ant, except it has an extra ankle joint on each of its legs, making it different to control. We do not use the same control scheme as in DM Control, and instead give it the same observations as the Ant agent.

Tasks:

* Way Point Navigation: The agent is tasked with navigating through a plane and reaching specific waypoints. As soon as the agent reaches one waypoint, another is randomly placed. The agent receives a reward for its L2 distance from the waypoint and a sparse reward of 100 upon reaching it. The observation space is given by the agent’s current position and the position of the waypoint. The high-level policy is trained with a horizon of 50 high-level steps.
* Maze: The agent must navigate through a ‘U’-shaped maze and reach the end. The agent only receives a sparse reward of 1000 upon reaching its final goal. During training, final goal locations are sampled uniformly from the six “blocks” of the maze path, while in evaluation the final goal is always the end of the maze. The observation space is given by just the agent’s current position and the position of the final goal.
* Steps: The agent has to navigate through a structure similar in shape to the Maze, although only half the size. The height of the taller step is 0.3125 meters, while the height of the shorter step is 0.15625 meters. When the Ant agent is used in the step environment, it is given 16 rangefinder sensors, and its low level is pretrained on an environment with randomly placed steps.

#### Manipulation

For all manipulation tasks, low-level rewards are given by the negative L2 distance to the selected sub-goal, an action penalty, and an additional sparse reward:

$$
r^{lo} = -\lVert g_t - s^{hi}_t \rVert_2 - \lambda \lVert a_t \rVert_2^2 + \gamma \, \mathbb{1}\{\lVert g_t - f(s_t) \rVert < \epsilon\}
$$

High-level planning is performed every 35 steps. Again, all agents use torque control.

Agents:

* Point Mass: Identical to the previous point mass, just scaled to fit the environment.
* 2-Link Arm: This is the standard reacher from the OpenAI Gym set of environments, with end-effector collisions enabled.
* 3-Link Arm: A modified version of the standard 2-Link reacher with one extra degree of freedom. Each link is approximately one third the length of the arm.
* 4-Link Arm: A modified version of the standard 2-Link arm, created by splitting each link evenly into two.

Tasks:

* Block Push: The arm agent has to push a block across the environment to a target end position. We test on variations of difficulty based on block position. Here, high-level observations include the position of the end effector and the position and velocity of the block. High-level rewards correspond to the negative L2 distance of the block to its goal position plus a sparse reward of 200 for solving the task. The high-level goal space is defined to be the desired change in the x, y position of the agent’s end effector, limited to a distance of 0.07 meters in either direction. We have two variants of the block push task: Block Push 1, where the block must be pushed only horizontally, and Block Push 2, where the block must be pushed a shorter distance, but both horizontally and vertically.
* Peg Insertion: The agent now has a peg attached to its end effector that it must insert into a hole. High-level observations include the position of the tip of the peg and the position of the end effector. High-level rewards correspond to the negative L2 distance from the final desired insertion point plus a sparse reward of 50 for solving the task. For peg insertion, the high-level goal space is defined relative to the end of the peg.
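Both low-level rewards above translate directly into code. The following is a minimal NumPy sketch, assuming `s_hi` denotes the agent's high-level state (its x, y center or end-effector position, which here also plays the role of $f(s_t)$); the function names and the coefficient values $\lambda$, $\gamma$, and $\epsilon$ are illustrative assumptions, not values specified by the paper.

```python
import numpy as np

def nav_low_level_reward(s_hi_t, s_hi_prev, g_t, a_t, lam=1e-3):
    """Navigation: distance traveled, weighted by alignment with the
    direction to the sub-goal, minus an action penalty."""
    to_goal = g_t - s_hi_prev
    progress = np.dot(s_hi_t - s_hi_prev, to_goal) / np.linalg.norm(to_goal)
    return progress - lam * np.sum(a_t ** 2)

def manip_low_level_reward(s_hi_t, g_t, a_t, lam=1e-3, gamma=1.0, eps=0.01):
    """Manipulation: negative L2 distance to the sub-goal, an action
    penalty, and a sparse bonus when within eps of the goal."""
    dist = np.linalg.norm(g_t - s_hi_t)
    bonus = gamma * float(dist < eps)
    return -dist - lam * np.sum(a_t ** 2) + bonus
```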
### C. Training Details

When training low-level policies, we reset the environment only occasionally after selecting a new low-level goal, allowing the agent to learn to perform well in long-horizon settings. Low-level policies are trained over longer horizons than the exact number of steps between high-level actions. For high-level training on top of pre-trained low-levels, we collect samples only when the high-level policy sets a new sub-goal.

We include hyperparameters for all low-level training in Table [3](#Sx1.T3 "Table 3 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") and hyperparameters for all high-level training in Table [4](#Sx1.T4 "Table 4 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). When training the discriminator for low-level imitation, we anneal the learning rate linearly from its initial value to zero over the first “stop” fraction of training timesteps. This allows the agent to learn against an increasingly fixed target. Additionally, we anneal the discriminator weight in the reward function from its initial value to 0.1 linearly over the first 90% of training timesteps. Full parameters for the discriminators can be found in Table [5](#Sx1.T5 "Table 5 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer").

Additionally, we tested online and offline data collection. In offline data collection, transitions are randomly sampled from agent A’s low-level policy. In online data collection, we align the goals of the two agents, collecting transitions of agent A reaching goal g while agent B attempts to reach the same g. The results presented in the main paper body are exclusively from offline data collection.

For KL-regularized finetuning, we use the same parameters across almost all experiments. We add the KL divergence between agent B’s policy and agent A’s policy at every timestep. For the Waypoint task and all manipulation tasks, we use a KL weight coefficient of 1 in the loss, a learning rate of 0.01, and linearly anneal the weight of the KL loss to zero during the first 50% of training. For the Maze task, we lowered the learning rate to 0.001 and the KL loss coefficient to 0.01. We performed a search over learning rates for regular finetuning and found that the original learning rate of the policy tended to perform best, so we used it for comparison. A sketch of the linear annealing schedules described above follows.
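All three schedules in this section are simple linear ramps. The sketch below is one way to write them, assuming the endpoints quoted above and in Table 5; the helper name and the example step counts are illustrative assumptions.

```python
def linear_anneal(t, total_steps, start, end, stop_frac):
    """Linearly interpolate from `start` to `end` over the first
    `stop_frac` fraction of training, then hold at `end`."""
    frac = min(t / (stop_frac * total_steps), 1.0)
    return start + frac * (end - start)

total_steps, t = 1_000_000, 250_000  # example values only

# Discriminator learning rate: initial value -> 0 over the first
# "stop" fraction of timesteps (0.5 in Table 5).
lr_t = linear_anneal(t, total_steps, start=3e-4, end=0.0, stop_frac=0.5)

# Discriminator reward weight: initial value -> 0.1 over the first 90%.
disc_w_t = linear_anneal(t, total_steps, start=0.4, end=0.1, stop_frac=0.9)

# KL loss weight during finetuning: 1 -> 0 over the first 50%.
kl_w_t = linear_anneal(t, total_steps, start=1.0, end=0.0, stop_frac=0.5)
```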
### D. Extended zero-shot results

Complete results for all zero-shot transfers on the navigation tasks can be found in Table [6](#Sx1.T6 "Table 6 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") and Table [7](#Sx1.T7 "Table 7 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer"). Table [6](#Sx1.T6 "Table 6 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") contains results for the waypoint navigation task for two more agents: the Two-Leg Ant and the Quadruped. Table [7](#Sx1.T7 "Table 7 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") has results for zero-shot transfer on the Maze task, which was omitted from the main section of the paper due to space constraints. For manipulation tasks, Table [8](#Sx1.T8 "Table 8 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") gives zero-shot results for the block push 1 task. The original zero-shot results in Table [1](#S4.T1 "Table 1 ‣ 4.3 How well does zero-shot hierarchical transfer work? ‣ 4 Experiments ‣ Hierarchically Decoupled Imitation for Morphological Transfer") include the full results for the insertion task.

### E. Extended discriminative imitation results

Full results for the low-level imitation experiments can be found in Table [9](#Sx1.T9 "Table 9 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") for navigation tasks transferring Point Mass to Ant and Table [10](#Sx1.T10 "Table 10 ‣ F. Resources ‣ Appendix ‣ Hierarchically Decoupled Imitation for Morphological Transfer") for the manipulation block push tasks. We include results for both online and offline data collection for the discriminator. We find that in many cases offline data collection performs as well as or better than online data collection.

### F. Resources

Our code can be found at <https://github.com/jhejna/hierarchical_morphology_transfer> and videos of our results can be found at <https://sites.google.com/berkeley.edu/morphology-transfer>.

| Agent | Timesteps | Learning Rate | Batch Size | Layers | Horizon | Reset Prob | Buffer Size | DM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PM (Nav) | 200000 | 0.0003 | 64 | 64 64 | 35 | 0.1 | 200000 | 4 |
| Ant(s) | 2500000 | 0.0008 | 100 | 400 300 | 100 | 0.1 | 1000000 | 4 |
| Quadruped | 2500000 | 0.0008 | 100 | 400 300 | 150 | 0.1 | 1000000 | 4 |
| Manipulation | 1200000 | 0.0003 | 100 | 128 96 | 45 | 0.25 | 250000 | 0.07 |

Table 3: Hyperparameters for low-level policy training. “DM” stands for goal delta max, i.e., the size of the goal space in each dimension sampled from during training.

| Task | Timesteps | Learning Rate | Batch Size | Layers | Horizon | Buffer Size |
| --- | --- | --- | --- | --- | --- | --- |
| Waypoint | 200000 | 0.0003 | 64 | 64 64 | 50 | 50000 |
| Maze | 400000 | 0.0003 | 64 | 64 64 | 100 | 50000 |
| Block Push | 500000 | 0.0003 | 64 | 64 64 | 60 | 50000 |
| Insert | 500000 | 0.0003 | 64 | 64 64 | 50 | 50000 |

Table 4: Hyperparameters for high-level policy training.

| Agent A | Learning Rate | Batch Size | Layers | Update Freq | Weight | Stop |
| --- | --- | --- | --- | --- | --- | --- |
| Nav PM | 0.0002 | 64 | 42 42 | 8 | 0.3 | 0.5 |
| PM, 2Link | 0.0003 | 64 | 42 42 | 8 | 0.4 | 0.5 |
| 3Link | 0.0005 | 64 | 42 42 | 8 | 0.4 | 0.5 |

Table 5: Hyperparameters for discriminator training.

| | Point Mass High | Ant High | Ant3 High | Ant2 High | Quadruped High |
| --- | --- | --- | --- | --- | --- |
| Point Mass Low | 1021.49±43.25 | 602.56±33.82 | 716.61±58.12 | 593.18±60.09 | 576.65±69.6 |
| Ant Low | 482.72±38.96 | 476.42±19.44 | 472.85±50.68 | 417.59±27.48 | 406.96±35.63 |
| Ant3 Low | 488.62±74.35 | 483.59±71.67 | 499.19±64.99 | 471.24±65.42 | 432.29±64.59 |
| Ant2 Low | 353.56±39.33 | 371.11±27.15 | 388.38±31.56 | 420.81±31.24 | 373.99±34.52 |
| Quadruped Low | 169.43±33 | 182.33±21.55 | 219.57±26.61 | 267.12±18.95 | 257.13±18.83 |

Table 6: Zero-shot transfer for the waypoint navigation task.

| | Point Mass High | Ant High | PM High Sampled | Ant High Sampled |
| --- | --- | --- | --- | --- |
| Point Mass Low | 0.80±0.18 | 0.00±0.00 | 0.96±0.04 | 0.56±0.04 |
| Ant Low | 0.30±0.08 | 0.16±0.14 | 0.65±0.04 | 0.5±0.08 |
| Ant 3 Low | 0.58±0.13 | 0.12±0.10 | 0.84±0.04 | 0.56±0.05 |
| Ant 2 Low | 0.74±0.17 | 0.14±0.13 | 0.87±0.09 | 0.60±0.05 |

Table 7: Zero-shot transfer for the maze task. “Sampled” indicates that goals were randomly sampled as during training; otherwise only the final goal at the end of the maze was used.
| | PM High | 2 Link High | 3 Link High | 4 Link High |
| --- | --- | --- | --- | --- |
| PM Low | 0.99±0.01 | 0.21±0.15 | 0.33±0.18 | 0.26±0.17 |
| 2 Link Low | 0.36±0.14 | 0.97±0.02 | 0.22±0.07 | 0.39±0.11 |
| 3 Link Low | 0.17±0.08 | 0.2±0.09 | 0.96±0.02 | 0.49±0.14 |
| 4 Link Low | 0.24±0.11 | 0.07±0.04 | 0.1±0.05 | 0.89±0.04 |

Table 8: Zero-shot results for the block push 1 task.

| Task | Waypoint | Maze Sampled | Maze End |
| --- | --- | --- | --- |
| Transfer | PM→Ant | PM→Ant | PM→Ant |
| Zero-Shot | 482.72±38.96 | 0.3±0.08 | 0.65±0.04 |
| Discrim Offline | 467.22±20.61 | 0.37±0.1 | 0.63±0.04 |
| Discrim Online | 546.06±14.78 | 0.55±0.13 | 0.72±0.03 |

Table 9: Discriminative imitation zero-shot results for the Ant.

| Transfer | PM→3Link | 2Link→3Link | 2Link→4Link | 3Link→4Link | PM→3Link | 2Link→3Link | 2Link→4Link | 3Link→4Link |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zero-Shot | 0.17±.08 | 0.2±.09 | 0.07±.04 | 0.1±.05 | 0.24±.12 | 0.61±.16 | 0.19±.12 | 0.39±.16 |
| Discrim Off | 0.35±.13 | 0.49±.12 | 0.28±.14 | 0.15±.04 | 0.43±.15 | 0.42±.11 | 0.41±.16 | 0.41±.15 |
| Discrim On | 0.34±.1 | 0.43±.15 | 0.17±.08 | 0.23±.13 | 0.43±.12 | 0.42±.11 | 0.46±.13 | 0.44±.14 |

Table 10: Discriminative imitation zero-shot results for various manipulation configurations. The first four transfer columns are on Block Push 1 and the last four are on Block Push 2.