## **AI Alignment: A Comprehensive Survey** **Jiaming Ji** **[*,1]** **Tianyi Qiu** **[*,1]** **Boyuan Chen** **[*,1]** **Borong Zhang** **[*,1]** **Hantao Lou** **[1]** **Kaile Wang** **[1]** **Yawen Duan** **[2]** **Zhonghao He** **[2]** **Lukas Vierling** **[3]** **Donghai Hong** **[1]** **Jiayi Zhou** **[1]** **Zhaowei Zhang** **[1]** **Fanzhi Zeng** **[1]** **Juntao Dai** **[1]** **Xuehai Pan** **[1]** **Kwan Yee Ng Aidan O’Gara** **[6]** **Hua Xu** **[1]** **Brian Tse Jie Fu** **[5]** **Stephen McAleer** **[3]** **Yaodong Yang** **[1,]** [ �] **Yizhou Wang** **[1]** **Song-Chun Zhu** **[1]** **Yike Guo** **[5]** **Wen Gao** **[1]** **1** Peking University **2** University of Cambridge **3** University of Oxford **4** Carnegie Mellon University **5** Hong Kong University of Science and Technology **6** University of Southern California **Abstract** AI alignment aims to make AI systems behave in line with human intentions and values. As AI systems grow more capable, so do risks from misalignment. To provide a comprehensive and up-to-date overview of the alignment field, in this survey, we delve into the core concepts, methodology, and practice of alignment. First, we identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality ( **RICE** ). Guided by these four principles, we outline the landscape of current alignment research and decompose them into two key components: **forward alignment** and **back-** **ward alignment** . The former aims to make AI systems aligned via alignment training, while the latter aims to gain evidence about the systems’ alignment and govern them appropriately to avoid exacerbating misalignment risks. On forward alignment, we discuss techniques for learning from feedback and learning under distribution shift. Specifically, we survey traditional preference modeling methods and reinforcement learning from human feedback, and further discuss potential frameworks to reach scalable oversight for tasks where effective human oversight is hard to obtain. Within learning under distribution shift, we also cover data distribution interventions such as adversarial training that help expand the distribution of training data, and algorithmic interventions to combat goal misgeneralization. On backward alignment, we discuss assurance techniques and governance practices. Specifically, we survey assurance methods of AI systems throughout their lifecycle, covering safety evaluation, interpretability, and human value compliance. We discuss current and prospective governance practices adopted by governments, industry actors, and other third parties, aimed at managing existing and future AI risks. This survey aims to provide a comprehensive yet beginner-friendly review of alignment research topics. Based on this, we also release and continually update the website www.alignmentsurvey.com which features tutorials, collections of papers, blog posts, and other resources. - Equal contribution. Corresponding author. Contact . - Version: v4 (updated on Feb 27, 2024). The content of the survey will be continually updated. 1 CONTENTS **Contents** **1** **Introduction** **4** 1.1 The Motivation for Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1.1 Risks of Misalignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1.2 Causes of Misalignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2 The Scope of Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.2.1 The Alignment Cycle: A Framework of Alignment . . . . . . . . . . . . . . . . . . . . . 10 1.2.2 RICE: The Objectives of Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 1.2.3 Discussion on the Boundaries of Alignment . . . . . . . . . . . . . . . . . . . . . . . . . 15 **2** **Learning from Feedback** **17** 2.1 Feedback Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.2 Preference Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 2.3 Policy Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.3.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.3.2 Reinforcement Learning from Human Feedback (RLHF) . . . . . . . . . . . . . . . . . . 24 2.4 Scalable Oversight: Path towards Superalignment . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.4.1 From RLHF to RL _x_ F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.4.2 Iterated Distillation and Amplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.4.3 Recursive Reward Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.4.4 Debate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.4.5 Cooperative Inverse Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . 31 2.4.6 Circuit Breaking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 2.4.7 Weak-to-Strong Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 **3** **Learning under Distribution Shift** **33** 3.1 The Distribution Shift Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 3.2 Algorithmic Interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.2.1 Cross-Distribution Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.2.2 Navigation via Mode Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.3 Data Distribution Interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.3.1 Adversarial Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.3.2 Cooperative Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 **4** **Assurance** **42** 4.1 Safety Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 4.1.1 Datasets and Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 4.1.2 Evaluation Targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 4.1.3 Red Teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 4.1.4 Safetywashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.2 Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.2.1 Intrinsic Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 4.2.2 Post Hoc Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 4.2.3 Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 4.3 Human Values Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 4.3.1 Formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 4.3.2 Evaluation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 **5** **Governance** **54** 5.1 The Role of AI Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 5.2 The Multi-Stakeholder Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 5.3 Open Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 5.3.1 International Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 5.3.2 Open-Source Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 5.4 Rethinking AI Alignment from a Socio-technical Perspective . . . . . . . . . . . . . . . . . . . . 59 5.4.1 Incorporating Values into AI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 5.4.2 Alignment Techniques for AI Governance . . . . . . . . . . . . . . . . . . . . . . . . . . 60 2 CONTENTS **6** **Conclusion** **60** 6.1 Key Challenges in the Alignment Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 6.2 Key Traits and Future Directions in Alignment Research . . . . . . . . . . . . . . . . . . . . . . 62 3 **1** **Introduction** Recent advancements have seen the increasing application of capable AI systems in complex domains. For instance, Large Language Models (LLMs) have exhibited improved capabilities in multi-step reasoning (Wei et al., 2022; Wang et al., 2023c) and cross-task generalization (Brown et al., 2020b; Askell et al., 2021) in real-world deployment settings, and these abilities are strengthened with increased training time, training data, and parameter size (Kaplan et al., 2020; Srivastava et al., 2023; Hoffmann et al., 2022). The utilization of Deep Reinforcement Learning (DRL) for the control of nuclear fusion (Degrave et al., 2022) is another notable example. The increasing capabilities and deployment in high-stakes domains come with heightened risks. Various undesirable behaviors exhibited by advanced AI systems ( _e.g._, manipulation (Perez et al., 2023; Carroll et al., 2023; Sharma et al., 2024) and deception (Park et al., 2023b)) have raised concerns about the hazards from AI systems. Consequently, these concerns have catalyzed research efforts in _AI alignment_ (Soares and Fallenstein, 2014; Christian, 2020; Hendrycks et al., 2021b). AI alignment aims to make AI systems behave in line with human intentions and values (Leike et al., 2018), focusing more on the objectives of AI systems than their capabilities. Failures of alignment ( _i.e._, misalignment) are among the most salient causes of potential harm from AI. Mechanisms underlying these failures include _reward hacking_ (Pan et al., 2021) and _goal misgeneralization_ (Di Langosco et al., 2022), which are further amplified by _double edge components_ such as situational awareness (Cotra, 2022), broadly-scoped goals (Ngo et al., 2024), mesa-optimization objectives (Hubinger et al., 2019c), and access to increased resources (Shevlane et al., 2023) (§1.1.2). Alignment efforts to address these failures focus on accomplishing four key objectives (§1.2.2): Robustness, Interpretability, Controllability, and Ethicality ( **RICE** ). Current research and practice on alignment consist of four areas (§1.2): Learning from Feedback (§2), Learning under Distributional Shift (§3), Assurance (§4), and Governance (§5). The four areas and the RICE objectives are not in one-to-one correspondence. Each individual area often serves more than one alignment objective, and vice versa (see Table 1). In this survey, we introduce the concept, methodology, and practice of AI alignment and discuss its potential future directions. [1] **1.1** **The Motivation for Alignment** The motivation for alignment is a three-step argument, each step building upon the previous one: (1) Deep learningbased systems (or applications) have an increasingly large impact on society and bring significant risks ; (2) Misalignment represents a significant source of risks; and (3) Alignment research and practice address risks stemming from misaligned systems ( _e.g._, power-seeking behaviors). **1.1.1** **Risks of Misalignment** With improved capabilities of AI systems, come increased risks. [2] Some undesirable behaviors of LLMs including (but not limited to) untruthful answers (Bang et al., 2023), sycophancy (Perez et al., 2023; Sharma et al., 2024), and deception (Jacob Steinhardt, 2023; Park et al., 2023b) worsen with increased model scale (Perez et al., 2023), resulting in concerns about advanced AI systems that are hard to control. Moreover, emerging trends such as _LLM-based agents_ (Xi et al., 2023; Wang et al., 2023b) also raise concerns about the system’s controllability and ethicality (Chan et al., 2023). Looking further ahead, the development of increasingly competent AI systems opens up the possibility of realizing Artificial General Intelligence (AGI) in the foreseeable future, _i.e._, systems can match or surpass human intelligence in all relevant aspects (Bubeck et al., 2023). This could bring extensive opportunities (Manyika et al., 2017), _e.g._, automation (West, 2018), efficiency improvements (Furman and Seamans, 2019), but also come with serious risks (CAIS, 2023; Critch and Russell, 2023), such as safety concerns (Hendrycks and Mazeika, 2022), biases and inequalities (Ntoutsi et al., 2020), and large-scale risks from superhuman capabilities (Bengio, 2023). Taking biases as an example, cutting-edge LLMs manifest discernible biases about gender, sexual identity, and immigrant status among others (Perez et al., 2023), which could reinforce existing inequalities. Within the large-scale risks from superhuman capabilities, it has been conjectured that global catastrophic risks ( _i.e._, risks of severe harms on a global scale) (Bostrom and Cirkovic, 2011; Hendrycks et al., 2023; Government of the United Kingdom, 2023) and existential risks ( _i.e._, risks that threaten the destruction of humanity’s long-term potential) from advanced AI systems are especially worrying. These concerns are elaborated in first-principle deductive arguments (Ngo, 2020a; Bengio, 2023), evolutionary analysis (Hendrycks, 2023), and concrete scenario mapping (Christiano, 2019; Kenton et al., 2022). In CAIS (2023), leading AI scientists and other notable figures stated that _Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale_ _risks such as pandemics and nuclear war_ . The median researcher surveyed by Stein-Perlman et al. (2022) at NeurIPS 2021 and ICML 2021 reported a 5% chance that the long-run effect of advanced AI on humanity would 1To help beginners interested in this field learn more effectively, we highlight resources about alignment techniques. More details can be found at www.alignmentsurvey.com/resources 2We discuss and taxonomize the risks that might brought by misaligned AI systems, please see §1.1.2. 4 1.1 The Motivation for Alignment be _extremely bad (e.g., human extinction)_, and 36% of NLP researchers surveyed by Michael et al. (2022) selfreported to believe that _AI could produce catastrophic outcomes in this century, on the level of all-out nuclear_ _war_ . [3] Existential risks from AI also include risks of lock-in, stagnation, and more (Bostrom, 2013; Hendrycks and Mazeika, 2022), in addition to extinction risks. [4] The UK have hosted the world’s first global AI Safety Summit, gathering international governments, leading AI companies, civil society groups, and research experts. Its objectives are to: (1) assess the risks associated with AI, particularly at the cutting edge of its development; (2) explore how these risks can be mitigated through internationally coordinated efforts. [5] The summit culminated in the Bletchley Declaration (Summit, 2023), which highlighted the importance of international cooperation on AI safety. It was signed by representatives from 28 countries and the EU. Current cutting-edge AI systems have exhibited multiple classes of undesirable or harmful behaviors that may contrast with human intentions ( _e.g._, power-seeking and manipulation) (Si et al., 2022; Pan et al., 2023a), and similar worries about more advanced systems have also been raised (Critch and Krueger, 2020; CAIS, 2023). [6] These undesirable or harmful behaviors not compliant with human intentions, known as _misalignment_ of AI systems [7], can naturally occur even without misuse by malicious actors and represent a significant source of risks from AI, including safety hazards (Hendrycks et al., 2021b) and potential existential risks (Hendrycks et al., 2023). [8] These large-scale risks are significant in size due to the non-trivial likelihoods of (1) building superintelligent AI systems, (2) those AI systems pursuing large-scale goals, (3) those goals are misaligned with human intentions and values, and (4) this misalignment leads to humans losing control of humanity’s future trajectory (Ngo, 2020a). Solving the risks brought by misalignment requires the _alignment_ of AI systems to ensure the objectives of the system are in accordance with human intentions and values, thereby averting unintended and unfavorable outcomes. More importantly, we expect the alignment techniques to be scaled to harder tasks and significantly advanced AI systems that are even smarter than humans. A potential solution is _Superalignment_ [9], which aims to build a roughly human-level automated alignment researcher, thereby using vast amounts of compute to scale up and iteratively align safe superintelligence (OpenAI, 2023c). **1.1.2** **Causes of Misalignment** In the above section, we have concluded the motivation for alignment from the perspective of the concern for AI risks and technical ethics. To offer a deeper understanding of alignment, we aim to further analyze why and how the misalignment issues occur. We will first give an overview of common failure modes, and then focus on the mechanism of feedback-induced misalignment, and finally shift our emphasis towards an examination of misaligned behaviors and dangerous capabilities. In this process, we introduce the concept of _double edge compo-_ _nents_, which offer benefits for enhancing the capabilities of future advanced systems but also bear the potential for hazardous outcomes. **Overview of Failure Modes** In order to illustrate the misalignment issue, we give an overview of alignment failure modes in this section, most of which can be categorized into _reward hacking_ [10] and _goal misgeneralization_ . The learning process of RL can be deconstructed into two distinct phases: firstly, the creation of an agent primed for reward optimization, and secondly, the establishment of a reward process that furnishes the agent with appropriate reward signals. Within the framework of the Markov Reward Process (Marbach and Tsitsiklis, 2001; Puterman, 2014; Sutton and Barto, 2018), the former phase can be seen as the learning process related to the transition model ( _e.g._, model-based RL agents (Moerland et al., 2023)), or the development of specialized algorithms. The latter phase can be viewed as the construction of proxy rewards, which aim to approximate the true rewards derived from sources ( _e.g._, human preferences or environment) (Ng et al., 2000; Leike et al., 2018). _Reward Hacking_ : In practice, proxy rewards are often easy to optimize and measure, yet they frequently fall short of capturing the full spectrum of the actual rewards (Pan et al., 2021). This limitation is denoted as _misspecified_ _rewards_ . [11] The pursuit of optimization based on such misspecified rewards may lead to a phenomenon known 3However, survey results may hinge upon the exact wording of the questions and should be taken with caution. 4 _Existential_ and _extinction_ risks are two concepts that are often mixed up. The latter is a subset of the former. [5Source from https://www.gov.uk/government/topical-events/ai-safety-summit-2023.](https://www.gov.uk/government/topical-events/ai-safety-summit-2023) 6See §1.1.2 for an introduction to specific misalignment challenges. 7Some of the misaligned behaviors are less risky ( _e.g._, the agent fails to clean the room as you want), however, some of them are dangerous for systems applied in the high-stakes environment ( _e.g._, the control of nuclear fusion (Degrave et al., 2022)) 8It should be noted that misalignment cannot cover all sources of risks brought by Deep learning-based systems and other factors such as misuse and negligence also contribute to risks on society. See §1.2.3 for discussing AI safety beyond alignment. [9For more details on Superalignment, you can refer to https://openai.com/blog/introducing-superalig](https://openai.com/blog/introducing-superalignment) [nment.](https://openai.com/blog/introducing-superalignment) 10 _Reward hacking_ can also be broadly considered as a kind of _specification gaming_ . 11A similar definition is reward misidentification in which scenario the reward function is only partially identifiable. For more details on reward misidentification, see _e.g._, Tien et al. (2022); Skalse et al. (2023) 5 1.1 The Motivation for Alignment as _reward hacking_, wherein agents may appear highly proficient according to specific metrics but fall short when evaluated against human standards (Amodei et al., 2016; Everitt et al., 2017). The discrepancy between proxy rewards and true rewards often manifests as a sharp phase transition in the reward curve (Ibarz et al., 2018). Furthermore, Skalse et al. (2022) defines the hackability of rewards and provides insights into the fundamental mechanism of this phase transition, highlighting that the inappropriate simplification of the reward function can be a key factor contributing to reward hacking. Misspecified rewards often occur due to a neglect of severe criteria for the outcomes, thus making specification too broad and potentially easily hacked (Victoria et al., 2020). More than poor reward design (Ng et al., 1999), the choice of training environment and simulator with bugs (Code Bullet, 2019) can both lead to AI systems failing to satisfy intended objectives. These problems stem from task specification, broadly defined as _specification gaming_, which refers to AI systems exploiting loopholes in the task specification without achieving intended outcomes. [12] (Victoria et al., 2020) _Reward tampering_ can be considered a special case of reward hacking (Everitt et al., 2021; Skalse et al., 2022), referring to AI systems corrupting the reward signals generation process (Ring and Orseau, 2011). Everitt et al. (2021) delves into the subproblems encountered by RL agents: (1) _tampering of reward function_, where the agent inappropriately interferes with the reward function itself, and (2) _tampering of reward function input_, which entails corruption within the process responsible for translating environmental states into inputs for the reward function. When the reward function is formulated through feedback from human supervisors, models can directly influence the provision of feedback ( _e.g._, AI systems intentionally generate challenging responses for humans to comprehend and judge, leading to feedback collapse) (Leike et al., 2018). Since task specification has its physical instantiation ( _e.g._, memory registers storing the reward signals), the AI systems deployed in the real world have the potential to practice manipulation behaviors, resulting in more hazardous outcomes (Victoria et al., 2020). Moreover, it has been demonstrated that easily discovered reward tampering behaviors can generalize to sophisticated specification gaming, which cannot be prevented by using 3H environments or preference reward modeling training (anthropic, 2024). _Goal Misgeneralization: Goal misgeneralization_ is another failure mode, wherein the agent actively pursues objectives distinct from the training objectives in deployment while retaining the capabilities it acquired during training (Di Langosco et al., 2022). [13] For instance, in _CoinRun_ games, the agent frequently prefers reaching the end of a level, often neglecting relocated coins during testing scenarios. Di Langosco et al. (2022) draw attention to the fundamental disparity between capability generalization and goal generalization, emphasizing how the inductive biases inherent in the model and its training algorithm may inadvertently prime the model to learn a proxy objective that diverges from the intended initial objective when faced with the testing distribution. It implies that even with perfect reward specification, goal misgeneralization can occur when faced with distribution shifts (Amodei et al., 2016). It should be noted that goal misgeneralization can occur in any learning system, not limited to RL since the core feature is the pursuit of unintended goals (Shah and Varma, 2022). Moreover, it might be more dangerous if advanced AI systems escape control and leverage their capabilities to bring about undesirable states (Zhuang and Hadfield-Menell, 2020). **Feedback-Induced Misalignment** With the proliferation of advanced AI systems, the challenges related to reward hacking and goal misgeneralization have become increasingly pronounced in open-ended scenarios (Paulus et al., 2018; Knox et al., 2023). Gao et al. (2023) underscores that more capable agents tend to exploit misspecified rewards to a greater extent. While many current AI systems are primarily driven by self-supervision, it’s worth noting that a substantial portion relies on feedback rewards derived from human advisors (Bai et al., 2022a), allowing us to introduce the mechanism of feedback-induced misalignment. The misalignment issues are particularly pressing in open-ended scenarios, and we can attribute them to two primary factors: - **Limitations of Human Feedback** . During the training of LLMs, inconsistencies can arise from human data annotators ( _e.g._, the varied cultural backgrounds of these annotators can introduce implicit biases (Peng et al., 2022)) (OpenAI, 2023a). Moreover, they might even introduce biases deliberately, leading to untruthful preference data (Casper et al., 2023b). For complex tasks that are hard for humans to evaluate ( _e.g._, the value of game state), these challenges [14] become even more salient (Irving et al., 2018). - **Limitations of Reward Modeling** . Training reward models using comparison feedback can pose significant challenges in accurately capturing human values. For example, these models may unconsciously learn suboptimal or incomplete objectives, resulting in reward hacking (Zhuang and Hadfield-Menell, 2020; Skalse et al., 12For more instances about specification gaming, please see Krakovna (2020) 13More discussion about Goal Misgeneralization can be found in §3.1. 14As AI systems are deployed into more complex tasks, these difficulties amplify, necessitating novel solutions such as _scalable oversight_ (Cotra, 2018). 6 1.1 The Motivation for Alignment Figure 1: Dangerous Capabilities. Advanced AI systems would be incentivized to seek power because power will help them achieve their given objectives. Powerful AI systems might hack computer systems, manipulate humans, control and develop weaponry, and perform ethical violations while avoiding a shutdown. Original copyright belongs to wiki (wikipedia, 2023), based on which we have made further adjustments. We will further discuss these issues in §1.1.2. 2022). Meanwhile, using a single reward model may struggle to capture and specify the values of a diverse human society (Casper et al., 2023b). Additionally, Huang et al. (2023); Andreas (2022); Kim et al. (2024) demonstrate that advanced AI systems exhibit patterns of goal pursuit and multi-step reasoning capability, which further aggravate the situation if the reward is not well-defined (Ngo et al., 2024; Yang et al., 2023a). _Discussion:_ It can be challenging to distinguish goal misgeneralization from reward hacking in specific cases. For instance (Shah and Varma, 2022), LLMs are trained to generate _harmless, honest, and helpful_ outputs, but LLMs may occasionally produce harmful outputs in detail, which seemingly receive low rewards in testing distribution (which could be seen as goal misgeneralization). However, in cases where labelers are incentivized to assign high rewards to responses deemed more helpful during the labeling process, the scenarios above [15] actually receive high rewards and represent a form of specification gaming (or reward hacking). The distinction between these two scenarios can be vague at times. More research is needed to analyze the failure modes, gain a deeper understanding of reward hacking, and develop effective methods for detecting and mitigating goal misgeneralization to address the challenges of misaligned advanced AI systems. **Misaligned Behaviors and Outcomes** Drawing from the misalignment mechanism, optimizing for a non-robust proxy may result in misaligned behaviors, potentially leading to even more catastrophic outcomes. This section delves into a detailed exposition of specific **misaligned behaviors** (•) and introduces what we term **double edge** **components** (+). These components are designed to enhance the capability of AI systems in handling real-world settings but also potentially exacerbate misalignment issues. It should be noted that some of these **double edge** **components** (+) remain speculative. Nevertheless, it is imperative to discuss their potential impact before it is too late, as the transition from controlled to uncontrolled advanced AI systems may be just one step away (Ngo, 2020b). With increased model scale, a class of **dangerous capabilities** (*) (Shevlane et al., 2023) could also emerge. The **dangerous capabilities** (*) are concrete tasks the AI system could carry out; they may not necessarily be misaligned in themselves but are instrumental to actualizing extreme risks. We first introduce the **double edge components** (+) and analyze how they act on AI systems. Then, we illustrate the **misaligned behaviors** (•) and **dangerous capabilities** (*) to show specific misalignment issues and provide directions for future alignment evaluation research. 15Harmful but detailed responses 7 1.1 The Motivation for Alignment + **Situational Awareness** . AI systems may gain the ability to effectively acquire and use knowledge about its status, its position in the broader environment, its avenues for influencing this environment, and the potential reactions of the world (including humans) to its actions (Cotra, 2022). Similar behaviors have been observed in LLMs (Jonas DeGrave, 2022; Evan Hubinger, 2023). Knowing the situation can help the model better understand human intent, finish tasks within its ability, and search for outlier help if needed. However, such knowledge also paves the way for advanced methods of reward hacking, heightened deception/manipulation skills, and an increased propensity to chase instrumental subgoals (Ngo et al., 2024). Consequently, it should be given priority when evaluating potentially hazardous capabilities in AI models, alongside eight other key competencies (Shevlane et al., 2023). A highly relevant discussion is whether language models possess _world_ _models_ (LeCun, 2022; Li et al., 2022b). + **Broadly-Scoped Goals** . Advanced AI systems are expected to develop objectives that span long timeframes, deal with complex tasks, and operate in open-ended settings (Ngo et al., 2024). Engaging in broadly-scoped planning can empower AI systems to generalize better on the OOD settings and serve as valuable assistants in realms such as human healthcare. However, it can also bring about the risk of encouraging manipulating behaviors ( _e.g._, AI systems may take some _bad_ actions to achieve human happiness, such as persuading them to do high-pressure jobs [16] (Jacob Steinhardt, 2023)). Intuitively, one approach to mitigate this risk is to confine the optimizable objectives to short-sighted ones, such as predicting only the next word, thereby preventing over-ambitious planning, but such approaches limit systems’ utility and may fail; for instance, source text data ( _e.g._, fiction) can help AI systems understand the intent and belief of the roles, and thus longer-term goal-directed behavior can be elicited (Andreas, 2022). Additionally, techniques such as RLbased fine-tuning (Christiano et al., 2017; Ouyang et al., 2022) or the application of chain-of-thought prompts (Wei et al., 2022) can enable models to adapt their acquired knowledge about planning to pave the way for broadly-scoped planning objectives (Jacob Steinhardt, 2023). + **Mesa-Optimization Objectives** . The learned policy may pursue inside objectives _when the learned policy_ _itself functions as an optimizer_ ( _i.e._, _mesa-optimizer_ ). However, this optimizer’s objectives may not align with the objectives specified by the training signals, and optimization for these misaligned goals may lead to systems out of control (Hubinger et al., 2019c). Freeman et al. (2019); Wijmans et al. (2023) indicate that AI systems may possess implicit goal-directed planning and manifest emergent capabilities during the generalization phase. + **Access to Increased Resources** . Future AI systems may gain access to websites and engage in real-world actions, potentially yielding a more substantial impact on the world (Nakano et al., 2021). They may disseminate false information, deceive users, disrupt network security, and, in more dire scenarios, be compromised by malicious actors for ill purposes. Moreover, their increased access to data and resources can facilitate _self-proliferation_, posing existential risks (Shevlane et al., 2023). - **Power-Seeking Behaviors** . AI systems may exhibit behaviors that attempt to gain control over resources and humans and then exert that control to achieve its assigned goal (Carlsmith, 2022). The intuitive reason why such behaviors may occur is the observation that for almost any optimization objective ( _e.g._, investment returns), the optimal policy to maximize that quantity would involve power-seeking behaviors ( _e.g._, manipulating the market), assuming the absence of solid safety and morality constraints. Omohundro (2008); Bostrom (2012) have argued that power-seeking is an _instrumental subgoal_ which is instrumentally helpful for a wide range of objectives and may, therefore, be favored by AI systems. Turner et al. (2021) also proved that in MDPs that satisfy some standard assumptions, the optimal policies tend to be power-seeking. Perez et al. (2023) prompt LLMs to test their tendency to suggest power-seeking behaviors, find significant levels of such tendencies, and show that RLHF strengthens them. This also holds for other instrumental subgoals such as self-preservation (Bostrom, 2012; Shevlane et al., 2023). Another notable line of research is _side-_ _effect avoidance_, which aims to address power-seeking behaviors by penalizing agentic systems for having too much influence over the environment. It covers RL systems (Eysenbach et al., 2018; Turner et al., 2020) and symbolic planning systems (Klassen et al., 2022). - **Untruthful Output** . AI systems such as LLMs can produce either unintentionally or deliberately inaccurate output. Such untruthful output may diverge from established resources or lack verifiability, commonly referred to as _hallucination_ (Bang et al., 2023; Zhao et al., 2023). More concerning is the phenomenon wherein LLMs 16This behavior is due to models’ over-optimization for broadly-scoped goals and this over-optimization is hard to perceive by humans 8 1.1 The Motivation for Alignment may selectively provide erroneous responses to users who exhibit lower levels of education [17] (Perez et al., 2023). The behavior (also known as sycophancy) appears emergently at scale (Ajeya Cotra, 2021; Perez et al., 2023) and untruthful output has the potential to engender deception, especially as advanced AI systems gain greater access to online resources and websites (Jacob Steinhardt, 2023). - **Deceptive Alignment & Manipulation** . Manipulation & Deceptive Alignment is a class of behaviors that exploit the incompetence of human evaluators or users (Hubinger et al., 2019a; Carranza et al., 2023) and even manipulate the training process through _gradient hacking_ (Richard Ngo, 2022). These behaviors can potentially make detecting and addressing misaligned behaviors much harder. _Deceptive Alignment_ : Misaligned AI systems may deliberately mislead their human supervisors instead of adhering to the intended task. Such deceptive behavior has already manifested in AI systems that employ evolutionary algorithms (Wilke et al., 2001; Hendrycks et al., 2021b). In these cases, agents evolved the capacity to differentiate between their evaluation and training environments. They adopted a strategic pessimistic response approach during the evaluation process, intentionally reducing their reproduction rate within a scheduling program (Lehman et al., 2020). Furthermore, AI systems may engage in intentional behaviors that superficially align with the reward signal, aiming to maximize rewards from human supervisors (Ouyang et al., 2022; Lang et al., 2024). It is noteworthy that current large language models occasionally generate inaccurate or suboptimal responses despite having the capacity to provide more accurate answers (Lin et al., 2022c; Chen et al., 2021). These instances of deceptive behavior present significant challenges. They undermine the ability of human advisors to offer reliable feedback (as humans cannot make sure whether the outputs of the AI models are truthful and faithful). Moreover, such deceptive behaviors can propagate false beliefs and misinformation, contaminating online information sources (Hendrycks et al., 2021b; Chen and Shu, 2024). _Manipulation_ : Advanced AI systems can effectively influence individuals’ beliefs, even when these beliefs are not aligned with the truth (Shevlane et al., 2023). These systems can produce deceptive or inaccurate output or even deceive human advisors to attain deceptive alignment. Such systems can even persuade individuals to take actions that may lead to hazardous outcomes (OpenAI, 2023a). Early-stage indications of such behaviors are present in LLMs, [18] recommender systems (where the system influences the users’ preferences) (Kalimeris et al., 2021; Krueger et al., 2020; Adomavicius et al., 2022), and RL agents (where agents trained from human feedback adopt policies to trick human evaluators) (Amodei et al., 2017). Also, current LLMs already possess the capability needed for deception. In Spitale et al. (2023), it has been found that GPT-3 is super-human capable of producing convincing disinformation. Given all these early-stage indications, it is plausible that more advanced AI systems may exhibit more serious deceptive/manipulative behaviors. - **Collectively Harmful Behaviors** . AI systems have the potential to take actions that are seemingly benign in isolation but become problematic in multi-agent or societal contexts. Classical game theory offers simplistic models for understanding these behaviors. For instance, Phelps and Russell (2023) evaluates GPT3.5’s performance in the iterated prisoner’s dilemma and other social dilemmas, revealing limitations in the model’s cooperative capabilities. Perolat et al. (2017) executes a parallel analysis focused on common-pool resource allocation. To mitigate such challenges, the emergent field of Cooperative AI (Dafoe et al., 2020, 2021) has been advancing as an active research frontier. However, beyond studies grounded in simplified game-theoretical frameworks, there is a pressing need for research in more realistic, socially complex settings (Singh, 2014). In these environments, agents are numerous and diverse, encompassing AI systems and human actors (Critch and Krueger, 2020). Furthermore, the complexity of these settings is amplified by the presence of unique tools for modulating AI behavior, such as social institutions and norms (Singh, 2014). [19] - **Violation of Ethics** . Unethical behaviors in AI systems pertain to actions that counteract the common good or breach moral standards – such as those causing harm to others. These adverse behaviors often stem from omitting essential human values during the AI system’s design or introducing unsuitable or obsolete values into the system (Kenward and Sinclair, 2021). Moreover, recent works have found that current LLMs can infringe upon personal privacy by inferring personal attributes from the context provided during inference, which may violate human rights (Mireshghallah et al., 2024; Staab et al., 2024). Research efforts addressing these shortcomings span the domain of _machine ethics_ (Yu et al., 2018; Winfield et al., 2019; Tolmeijer et al., 17Such behaviors bare termed _sandbagging_ (Perez et al., 2023). They may have been learned from web text during pretraining, which suggests that supervised learning can also bring about deceptive behaviors if those behaviors are present in training data. 18Namely, the _untruthful output_ that we discuss above. 19We cover cooperative AI research in §3.3.2 and §4.3.1. 9 1.2 The Scope of Alignment **Trained System** **Governance** (§5 **)** |Col1|Col2| |---|---| |_(Alignment_|_ Training)_| |Col1|Col2| |---|---| |_(Alignment Refi_|_ nement)_| Figure 2: The Alignment Cycle. (1) **Forward Alignment** (alignment training) produces _trained systems_ based on _alignment requirements_ ; (2) **Backward Alignment** (alignment refinement) ensures the practical alignment of _trained systems_ and revises _alignment requirements_ ; (3) The cycle is repeated until reaching a sufficient level of alignment. Notably, although Backward Alignment has the end goal of ensuring the practical alignment of _trained_ _systems_, it is carried out all throughout the system’s lifecycle in service of this goal, including before, during, after training, and also after deployment (Shevlane et al., 2023; Koessler and Schuett, 2023; Schuett et al., 2023). 2020) and delve into pivotal questions, _e.g._, _whom should AI align with?_ (Santurkar et al., 2023), among other concerns. - **Dangerous Capabilities** . Figure 1 outlines the dangerous capabilities that advanced AI systems might have. As AI systems are deployed in the real world, they may pose risks to society in many ways ( _e.g._, hack computer systems, escape containment, and even violate ethics). They may hide unwanted behaviors, fool human supervisors, and seek more resources to become more powerful. Moreover, **double edge components** (+) may intensify the danger and lead to more hazardous outcomes, even resulting in existential risks (Bostrom, 2013). **1.2** **The Scope of Alignment** In this section, we focus on illustrating the scope of AI alignment: we constructed the alignment process as an _alignment cycle_ and decomposed it into _Forward Alignment Process_ and _Backward Alignment Process_ [20] (§1.2.1). Specifically, we discuss the role of _human values_ in alignment (§1.2.3) and further analyze AI safety problems beyond alignment (§1.2.3). **1.2.1** **The Alignment Cycle: A Framework of Alignment** We decompose alignment into **Forward Alignment** (alignment training) (§2, §3) and **Backward Alignment** (alignment refinement) (§4, §5). Forward Alignment aims to produce trained systems that follow alignment re 20From this point and throughout the survey, for convenience, we refer to “Forward Alignment” and “Backward Alignment”. 10 1.2 The Scope of Alignment quirements. [21] We decompose this task into Learning from Feedback (§2) and Learning under Distribution Shift (§3). Backward Alignment aims to ensure the practical alignment of the trained systems by performing evaluations in both simplistic and realistic environments and setting up regulatory guardrails to handle real-world complexities, _i.e._, Assurance (§4). It also covers the creation and enforcement of rules that ensure the safe development and deployment of AI systems, _i.e._, Governance (§5). At the same time, backward alignment updates the alignment requirements based on the evaluation and monitoring of the systems, both pre-deployment and post-deployment. These updated requirements then inform the next round of alignment training. The two phases, forward and backward alignment, thus form a cycle where each phase produces or updates the input of the next phase (see Figure 2). This cycle, what we call _the alignment cycle_, is repeated to produce increasingly aligned AI systems. We see alignment as a dynamic process in which all standards and practices should be continually assessed and updated. Notably, Backward Alignment (including the Assurance of alignment in AI systems and the Governance of AI systems) efforts occur throughout the entire alignment cycle, as opposed to only after training. As argued in Shevlane et al. (2023); Koessler and Schuett (2023), alignment and risk evaluations should occur in every stage of the system’s lifecycle, including before, during, after training, and post-deployment. Similarly, regulatory measures for every phase of the system’s lifecycle have been proposed and discussed (Schuett et al., 2023; Anderljung et al., 2023). The survey is structured around four core pillars: Learning from Feedback (§2) and Learning under Distribution Shift (§3), which constitute the components of Forward Alignment; and Assurance (§4) and Governance (§5) which form the elements of Backward Alignment. The subsequent paragraphs provide a concise introduction to each pillar, clarifying how they synergistically contribute to a comprehensive framework for AI alignment. - **Learning from Feedback** (§2) _Learning from feedback_ concerns the question of _during alignment training,_ _how do we provide and use feedback to behaviors of the trained AI system?_ It takes an input-behavior pair as given and only concerns how to provide and use feedback on this pair. [22] In the context of LLMs, a typical solution is reinforcement learning from human feedback (RLHF) (Christiano et al., 2017; Bai et al., 2022a), where human evaluators provide feedback by comparing alternative answers from the chat model, and the feedback is used via Reinforcement Learning (RL) against a trained reward model. Despite its popularity, RLHF faces many challenges (Pandey et al., 2022; Casper et al., 2023b; Tien et al., 2022), overcoming which has been a primary objective of alignment research (Bowman et al., 2022), and is one primary focus of the section. An outstanding challenge here is _scalable oversight_ (§2.4), _i.e._, providing high-quality feedback on super-human capable AI systems that operate in complex situations beyond the grasp of human evaluators, where the behaviors of AI systems may not be easily comprehended and evaluated by humans (Bowman et al., 2022). Another challenge is the problem of providing feedback on ethicality, which is approached by the direction of machine ethics (Anderson and Anderson, 2011; Tolmeijer et al., 2020). On the ethics front, misalignment could also stem from neglecting critical dimensions of variance in values, such as underrepresenting certain demographic groups in feedback data (Santurkar et al., 2023). There have also been work combining feedback mechanisms with _social choice_ methods to produce a more rational and equitable aggregation of preferences (Collective Intelligence Project, 2023) (see §1.2.3). - **Learning under Distribution Shift** (§3) In contrast to learning from feedback, which holds input fixed, this pillar focuses specifically on the cases where the distribution of input changes, _i.e._, where distribution shift occurs (Krueger et al., 2020; Thulasidasan et al., 2021; Hendrycks et al., 2021a). More specifically, it focuses on the preservation of _alignment properties_ ( _i.e._, adherence to human intentions and values) under distribution shift, as opposed to that of model capabilities. In other words, it asks how we can ensure an AI system well-aligned on the training distribution will also be well-aligned when deployed in the real world. One challenge related to distribution shift is _goal misgeneralization_, where, under the training distribution, the intended objective for the AI system ( _e.g._, following human’s real intentions) is indistinguishable from other unaligned objectives ( _e.g._, gaining human approval regardless of means). The system learns the latter, which leads to unaligned behaviors in deployment distribution (Di Langosco et al., 2022). Another related challenge is _auto-induced distribution shift_ (ADS), where an AI system changes its input distribution to maximize reward (Krueger et al., 2020; Perdomo et al., 2020). An example would be a recommender system shaping user preferences (Kalimeris et al., 2021; Adomavicius et al., 2022). Both goal misgeneralization and ADS are closely linked to deceptive behaviors (Park et al., 2023b) and manipulative behaviors (Shevlane et al., 2023) in AI systems, potentially serving as their causes. Interventions that address distribution shift include _algorith-_ 21Here, _alignment requirements_ refer to an operationalized specification of the alignment properties that are desired of the AI systems, including, for example, which concrete forms of robustness/interpretability/controllability/ethicality we require, in what specific settings we require them, and how they could be measured. 22Here, _behavior_ is broadly defined also to include the system’s internal reasoning, which can be examined via interpretability tools (see §4.2). 11 1.2 The Scope of Alignment _mic interventions_ (§3.2), which changes the training process to improve reliability under other distributions, and _data distribution interventions_ (§3.3) which expands the training distribution to reduce the discrepancy between training and deployment distributions. The former includes methods like Risk Extrapolation (REx) (Krueger et al., 2021) and Connectivity-based Fine-tuning (CBFT) (Lubana et al., 2023). The latter includes adversarial training (§3.3.1) (Song et al., 2018b; Bai et al., 2021) which augments training input distribution with adversarial inputs, and cooperative training (§3.3.2) (Dafoe et al., 2020, 2021) which aims to address the distribution gap between single-agent and multi-agent settings. [23] - **Assurance** (§4) Once an AI system has undergone forward alignment, we still need to gain confidence about its alignment before deploying it (Government of the United Kingdom, 2021; Anderljung et al., 2023). Such is the role of _assurance_ : assessing the alignment of trained AI systems. Methodologies of assurance include safety evaluations (Perez et al., 2023; Shevlane et al., 2023) (§4.1) and more advanced methods such as interpretability techniques (Olah et al., 2018) (§4.2) and red teaming (Perez et al., 2022) (§4.1.3). The scope of assurance also encompasses the verification of system’s alignment with human values, including formal theories focused on provable cooperativeness (Dafoe et al., 2021) and ethicality (Anderson and Anderson, 2011; Tolmeijer et al., 2020), and also a wide range of empirical and experimental methods (§4.3). Assurance takes place throughout the lifecycle of AI systems, including before, during, after training, and post-deployment, as opposed to only after training (Shevlane et al., 2023; Koessler and Schuett, 2023). [24] - **Governance** (§5) Assurance alone cannot provide full confidence about a system’s practical alignment since it does not account for real-world complexities. This necessitates governance efforts of AI systems that focus on their alignment and safety and cover the entire lifecycle of the systems (§5.1). We discuss the multistakeholder approach of AI governance, including the governmental regulations (Anderljung et al., 2023), the lab self-governance (Schuett et al., 2023), and the third-party practice, such as auditing (Shevlane et al., 2023; Koessler and Schuett, 2023) (§5.2). We also highlight several open problems in AI governance, including the pressing challenge of open-source governance (the governance of open-source models and the question of whether to open-source highly capable models) (Seger et al., 2023), and the importance of international coordination in AI governance (Ho et al., 2023) (§5.3). In addition to policy research, we also cover key actions from both the public and the private sector. **Comparison with Inner/Outer Decomposition** Our _alignment cycle_ framework (see Figure 2) decomposes alignment into four pillars: Learning from Feedback, Learning under Distribution Shift, Assurance and Governance organized into a circular process. The design principle for this framework is three-fold: Practical (making sure pillars directly correspond to specific practices in specific stages in the system’s lifecycle), Concrete (pointing to specific research directions as opposed to general themes), and Up-To-Date (accommodating and emphasizing latest developments in the alignment field). Recently, the decomposition of alignment into _outer alignment_ and _inner alignment_ has become popular in the alignment literature (Hubinger et al., 2019b). Outer alignment refers to the wishes of designers in accordance with the actual task specification ( _e.g._, goal & reward) used to build AI systems. And inner alignment is the consistency between task specification and the specification that the AI systems behaviors reflect (Krakovna, 2022). However, many criticisms have also been made about this characterization, including that it is ambiguous and is understood by different people to mean different things (Perry, 2020) and that it creates unnecessary difficulties by carving out problems that are not necessary conditions for success (Turner, 2022). Some have tried to remove the ambiguity by pinning down the specific causes of inner/outer misalignment and proposed, for example, _goal misspecification_ and _goal misgeneralization_ (Di Langosco et al., 2022; Krakovna, 2022). Learning from Feedback (approximately corresponding to _goal misspecification_ and _outer alignment_ ) and Learning under Distribution shift (approximately corresponding to _goal misgeneralization_ and _inner alignment_ ) in our framework tries to further improve upon the inner/outer decomposition by clarifying the exact approaches taken to address the challenges and resolving the ambiguity. Assurance and Governance, on the other hand, expands the scope to cover topics beyond outer and inner alignment. **Theoretical Research in Alignment** The alignment research literature also contains a wealth of theoretical work (Amodei et al., 2016; Everitt et al., 2018; Hendrycks et al., 2021b). These works often propose new directions and provide a foundation for practical and empirical research to build upon. We give a brief overview of this body of theoretical research below: 23Cooperative Training aims to make AI systems more cooperative in multi-agent settings. This cooperativeness addresses multi-agent failure modes where the AI system’s behavior appears benign and rational in isolation but becomes problematic within social or multi-agent scenarios (Critch and Krueger, 2020); see _collectively harmful behaviors_ in §1.1.2 for a more detailed account. 24Furthermore, it’s noteworthy that many techniques here are also applicable in the training process, _e.g._, red teaming is a key component of adversarial training (see §3.3.1), and interpretability can help with giving feedback (Burns et al., 2022). 12 1.2 The Scope of Alignment - **Conceptual Frameworks** . Some theoretical work proposes conceptual frameworks or characterizes subproblems within alignment. Examples include _instrumental convergence_ (wherein highly intelligent agents tend to pursue a common set of sub-goals, such as self-preservation and power-seeking) (Omohundro, 2008; Bostrom, 2012), _mesa-optimization_ (wherein the learned ML model performs optimization within itself during inference) (Hubinger et al., 2019c), and specific proposals for building aligned systems, such as _approval-directed_ _agents_ (wherein the AI system does not pursue goals, but seek the human’s idealized post hoc approval of action consequences) (Oesterheld, 2021; Christiano, 2022). Hadfield-Menell and Hadfield (2019); Cotra (2021) have drawn inspiration from economics, linking problems in alignment with markets and principal-agent problems in economics. Christiano et al. (2021); Hobbhahn (2022) have proposed the problem of _eliciting latent_ _knowledge_ of advanced AI systems and have explored high-level approaches to the problem. - **Mathematical Formulations** . Other theoretical works have aimed to formulate sub-problems within alignment mathematically and seek formal solutions. Soares et al. (2015) formulates the problem of corrigibility ( _i.e._, ensuring AI systems are incentivized to allow shutdown or objective modification by the instructor). Benson-Tilsen and Soares (2016) gives a mathematical formulation of instrumental convergence. HadfieldMenell et al. (2017a) proposes the _off-switch game_ to model the uncontrollability of AI agents. Turner et al. (2021) proves the power-seeking tendencies of optimal policies in Markov decision processes (MDPs) under certain assumptions. Everitt and Hutter (2016) proposes _value reinforcement learning_ to eliminate incentives for reward hacking (Skalse et al., 2022; Pan et al., 2021). Another avenue of research, designated as _agent_ _foundations_ (Soares and Fallenstein, 2017), aims to establish a rigorous formal framework for the agency that deals appropriately with unresolved issues of embedded agency. This body of work explores a variety of key topics, including corrigibility (Soares et al., 2015), value learning (Soares, 2018) and logical uncertainty (Garrabrant et al., 2016). **1.2.2** **RICE: The Objectives of Alignment** There is not a universally accepted definition of _alignment_ . Before embarking on this discussion, we must clarify what we mean by alignment objectives. Leike et al. (2018) frame it as the agent alignment problem, posing the question: “How can we create agents that behave in accordance with the user intentions?” One could also focus on super-human AI systems (OpenAI, 2023c) and ask: “How do we ensure AI systems much smarter than humans follow human intent?” A consistent theme in these discussions is the focus on _human intentions_ . To clearly define alignment goals, it’s imperative to accurately characterize human intentions, a challenging task, as noted by Kenton et al. (2021). For instance, the term _human_ can represent various entities ranging from an individual to humanity. Gabriel (2020) breaks down intentions into several categories, such as instruction (follow my direct orders), expressed intentions (act on my underlying wishes), revealed preferences (reflect my behaviorbased preferences), and so on. Concretely, we characterize the objectives of alignment with four principles: Robustness, Interpretability, Controllability, and Ethicality ( **RICE** ). Figure 3 summarizes the principles, and Table 1 gives the correspondence between alignment research directions covered in the survey and the principles to which they contribute. The following is a detailed explanation of the four principles. - **Robustness** Robustness refers to the resilience of AI systems when operating across diverse scenarios (Dietterich, 2017) or under adversarial pressures (Rudner and Toner, 2021b), especially the correctness of its objective in addition to capabilities. Robust AI systems should be able to cope with black swan events (Nicholas, 2008) and long-tailed risks (Hendrycks et al., 2021b), as well as a diverse array of adversarial pressures (Song et al., 2018b; Chakraborty et al., 2021). For example, an aligned language model ought to refuse requests to behave harmfully, but models can be made to cause harm through jailbreak prompts and other adversarial attacks (Carlini et al., 2024; Zou et al., 2023b; Shah et al., 2023). Instead, an adversarially robust model should behave as intended even when facing inputs designed to cause failure. As AI systems find increasing deployment in high-stakes domains such as the military and economy (Steinhardt and Toner, 2020), there will be a growing need to ensure their resilience against unexpected disruptions and adversarial attacks, given that even momentary failures can yield catastrophic consequences (Kirilenko et al., 2017; OecdAI, 2021; Rudner and Toner, 2021b). Aligned systems should consistently maintain robustness throughout their lifecycle (Russell, 2019). - **Interpretability** Interpretability demands that we can understand the AI systems’ inner reasoning, especially the inner workings of opaque neural networks (Räuker et al., 2023). Straightforward approaches to alignment 13 1.2 The Scope of Alignment **R** obustness **I** nterpretability **C** ontrollability **E** thicality Operates reliably under diverse scenarios & Resilient to unforeseen disruptions. Decisions and intentions are comprehensible & Reasoning is unconcealed and truthful. Behaviors can be directed by humans & Allows human intervention when needed. Adheres to global moral standards & Respects values within human society. Figure 3: The **RICE** principles define four key characteristics that an aligned system should possess, in no particular order: (1) **Robustness** states that the system’s stability needs to be guaranteed across various environments; (2) **Interpretability** states that the operation and decision-making process of the system should be clear and understandable; (3) **Controllability** states that the system should be under the guidance and control of humans; (4) **Ethicality** states that the system should adhere to society’s norms and values. These four principles guide the alignment of an AI system with human intentions and values. They are not end goals in themselves but intermediate objectives in service of alignment. assessments, such as behavioral evaluations, potentially suffer from dishonest behaviors (Turpin et al., 2024; Park et al., 2023b; Jacob Steinhardt, 2023) or deceptive alignment (Hubinger et al., 2019a; Carranza et al., 2023) of AI systems. One way to cope with this issue is to make AI systems honest, non-concealing, and non-manipulative (Pacchiardi et al., 2024; Radhakrishnan et al., 2023; Shevlane et al., 2023). Alternatively, we could build interpretability tools that peek into the inner concepts and mechanisms within neural networks (Elhage et al., 2021; Meng et al., 2022a). In addition to enabling safety assessments, interpretability also makes decision-making processes accessible and comprehensible to users and stakeholders, thus enabling human supervision. As AI systems assume a more pivotal role in real-world decision-making processes and high-stakes settings (Holzinger et al., 2017), it becomes imperative to demystify the decision-making process rather than allowing it to remain an opaque black box (DeepMind, 2018; Rudner and Toner, 2021a). - **Controllability** Controllability is a necessary attribute that ensures the actions and decision-making processes of a system remain subject to human oversight and intervention. It guarantees that human intervention can promptly rectify any deviations or errors in the system’s behavior (Soares et al., 2015; Hadfield-Menell et al., 2017a). As AI technology advances, an increasing body of research is expressing growing concerns about the controllability of these potent systems (Critch and Krueger, 2020; UniteAI, 2023; ARC Evals, 2023). When an AI system begins to pursue goals that contradict its human designers, it can manifest capabilities that pose significant risks, including deception, manipulation, and power-seeking behaviors (Shevlane et al., 2023; ARC Evals, 2023). The objective of controllability is sharply focused on enabling scalable human oversight during the training process (Bowman et al., 2022), as well as _corrigibility_ of AI systems ( _i.e._, not resisting shutdown or objective modification during deployment) (Soares et al., 2015). - **Ethicality** Ethicality refers to a system’s unwavering commitment to uphold human norms and values within its decision-making and actions. Here, the norms and values include both moral guidelines and other social norms/values. It ensures that the system avoids actions that violate ethical norms or social conventions, such as exhibiting bias against specific groups (Buolamwini and Gebru, 2018; Zhang et al., 2018a; Noble, 2018; Kearns and Roth, 2019; Raji et al., 2020; Berk et al., 2021), causing harm to individuals (Hendrycks et al., 2020; Pan et al., 2023a), and lacking diversity or equality when aggregating preferences (Collective Intelligence Project, 2023). A significant body of research is dedicated to developing ethical frameworks for AI systems (Hagendorff, 2020; Pankowska, 2020). This emphasis on imbuing AI systems with ethical principles is necessary for their integration into society (Winfield et al., 2019). **Comparing the RICE Principles with Their Alternatives** The **RICE** principles represent a succinct summary of alignment objectives from the perspective of alignment and coexistence of humans and machines. Several previous works have put forth guidelines concerning AI systems. Asimov’s Laws can be regarded as the earliest exploration of human-machine coexistence, emphasizing that robots should benefit humans and the difficulty of achieving this (Asimov, 1942). On another front, the FATE principle (Fairness, Accountability, Transparency, and Ethics) (Memarian and Doleck, 2023) leans towards defining high-level qualities AI systems should possess within the human-machine coexistence ecosystem. We aspire to answer the human-machine coexistence question from the standpoint of human governors and designers, considering what steps are necessary to ensure the builder AI systems are aligned with human intentions and values. Furthermore, some standards emphasize narrowly defined safety, such as the 3H standard (Helpful, Honest, and Harmless) (Askell et al., 2021) and governmental agency 14 1.2 The Scope of Alignment Table 1: Relationships between alignment research directions covered in the survey and the **RICE** principles, featuring the individual objectives each research direction aims to achieve. Filled circles stand for primary objectives, and unfilled circles stand for secondary objectives. **Alignment Research Directions & Practices** **Objectives** **Category** **Direction** **Method** **Robustness** **Interpretability** **Controllability** **Ethicality** Preference Modeling (§2.2) ~~•~~ ~~◦~~ RL/PbRL/IRL/ Imitation Learning RLHF ~~◦~~ ~~•~~ ~~•~~ RL _x_ F ~~◦~~ ~~•~~ ~~•~~ IDA ~~◦~~ ~~•~~ RRM ~~•~~ Debate ~~◦~~ ~~•~~ CIRL ~~◦~~ ~~◦~~ ~~•~~ ~~◦~~ DRO ~~•~~ IRM/REx ~~•~~ CBFT ~~•~~ Learning from Feedback (§2) Learning under Distribution Shift (§3) Assurance (§4) Governance (§5) Policy Learning (§2.3) Scalable Oversight (§2.4) Algorithmic Interventions (§3.2) Data Distribution Adversarial Training ~~•~~ ~~◦~~ Interventions (§3.3) Cooperative Training ~~•~~ ~~•~~ Safety Evaluations (§4.1) Social Concern - - Evaluations Extreme Risk - - Evaluations Red Teaming ~~•~~ ~~◦~~ ~~•~~ Interpretability (§4.2) ~~•~~ ~~◦~~ Human Values Verification (§4.3) Multi-Stakeholder Approach (§5.2) Learning/Evaluating - Moral Values Game Theory for - Cooperative AI Government ~~•~~ ~~•~~ ~~•~~ ~~•~~ Industry ~~•~~ ~~•~~ ~~•~~ ~~•~~ Third Parties ~~•~~ ~~•~~ ~~•~~ ~~•~~ International Governance (§5.3.1) ~~•~~ ~~•~~ ~~•~~ ~~•~~ Open-Source Governance (§5.3.2) ~~•~~ ~~•~~ ~~•~~ ~~•~~ proposals (White House, 2023). We aim to expand upon these standards by introducing other crucial dimensions, including Controllability and Robustness. **1.2.3** **Discussion on the Boundaries of Alignment** Following the introduction of alignment inner scope, in this section, we further discuss the relationship between AI safety and alignment. Actually, AI alignment constitutes a significant portion of AI safety concerns. In this section, we will delve into topics that fall right on the boundary of alignment, but well within the broader category of AI safety. Our discussion of broader AI safety concerns will draw from Hendrycks et al. (2023). **Human Values in Alignment** The inclusion of _Ethicality_ in our RICE principles signifies the critical role of human values in alignment. AI systems should be aligned not only with value-neutral human preferences (such as intentions for AI systems to carry out tasks) but also with moral and ethical considerations. These efforts are referred to as _value alignment_ (Gabriel, 2020; Gabriel and Ghazavi, 2021). [25] Considerations of human values are embedded in all parts of alignment – indeed, alignment research topics dedicated to human values are present in all four sections of our survey. Therefore, to provide a more holistic picture of these research topics, here we give an overview of them before delving into their details in each individual section. We classify alignment research on human values into three main themes: (1) _ethical and social values_ which aims to teach AI systems right from wrong, (2) _cooperative AI_ which aims to specifically foster cooperative be 25Although this term has also been used in other ways, such as to refer to alignment in general (Yuan et al., 2022). 15 1.2 The Scope of Alignment haviors from AI systems, and (3) _addressing social complexities_ which provides apparatus for the modeling of multi-agent and social dynamics. - **Ethical and Social Values** . Human values inherently possess a strong degree of abstraction and uncertainty. MacIntyre (2013) even points out that modern society lacks a unified value standard, and the value differences between people of different cultures can be vast. This raises the significant challenge of determining which human values we should align with. Although universally consistent human values may not exist, there are still some values that are reflected across different cultures. In the sections below, we discuss these from the perspectives of _Machine Ethics_, _Fairness_, and _Cross-Cultural Values in Social Psychology_ . _Machine Ethics_ : In contrast to much of alignment research which aligns AI systems with human preferences in general (encompassing both value-laden ones and value-neutral ones), _machine ethics_ have specifically focused on instilling appropriate moral values into AI systems (Yu et al., 2018; Winfield et al., 2019; Tolmeijer et al., 2020). This line of work started early on in the context of symbolic and statistical AI systems (Anderson et al., 2005; Arkoudas et al., 2005; Anderson and Anderson, 2007), and later expanded to include large-scale datasets (Hendrycks et al., 2020; Pan et al., 2023a) and deep learning-based/LLM-based methods (Jin et al., 2022a). We cover the formal branch of machine ethics in §4.3.1. _Fairness_ : Although there are controversies (Verma and Rubin, 2018; Saxena et al., 2019), the definition of fairness is relatively clear compared to other human values. Specifically, it is the absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics (Mehrabi et al., 2021). Therefore, there has been extensive research on AI fairness. These methods range from reducing data biases before training (d’Alessandro et al., 2017; Bellamy et al., 2018), to minimizing unfairness introduced during the training process (Berk et al., 2017), and finally addressing instances of unfairness that were not successfully learned during training (Xu et al., 2018a). _Cross-Cultural Values in Social Psychology_ : In the field of social psychology, numerous studies have focused on exploring clusters of values that exist among cross-cultural human communities, leading to the development of various cross-cultural values scales. The Allport-Vernon-Lindzey value system (Allport, 1955) posited that understanding an individual’s philosophical values constitutes a critical foundation for assessing their belief system. They devised a value scale comprising six primary value types, each representing people’s preferences and concerns regarding various aspects of life. Messick and McClintock (1968); McClintock and Van Avermaet (1982); Liebrand (1984); Van Lange et al. (1997) introduced and improved a quantifiable method, namely social value orientation (SVO), to assess an individual’s social value inclination. It utilizes quantitative approaches to evaluate how individuals allocate benefits to themselves and others, reflecting their social value orientation, such as altruism, individualism, _etc._ In subsequent work, Murphy et al. (2011); Murphy and Ackermann (2014) introduced the Slider Measure, which can be used to precisely assess the SVO value as a continuous angle based on the subject’s option to some specific questions. Rokeach (1973) developed a values inventory comprising 36 values, consisting of 18 terminal values representing desired end-states and 18 instrumental values signifying means to achieve those end-states. Schwartz (1992, 1994) conducted comprehensive questionnaire surveys in 20 diverse countries known as the Schwartz Value Survey. This study identified ten values that are universally recognized, regardless of culture, language, or location. These studies have all laid a solid theoretical foundation for establishing what kind of values AI should be aligned with. However, they are constrained by the historical context of their research and may not maintain strong universality across different times and cultures. - **Cooperative AI** . Arguably, the most exciting aspect of multi-agent interaction is cooperation, and cooperation failure is the most worrying aspect of multi-agent interaction. As an example of AI cooperation failure, the _2010 Flash Crash_ led to a temporary loss of trillions of market value in 2 minutes and was caused in part by interactions between high-frequency algorithmic traders (Kirilenko et al., 2017). Therefore, there is a need to implement mechanisms ensuring cooperation in agent-like AI systems and the environments they’re operating within (Dafoe et al., 2021). The high-level design principles and low-level implementations of such mechanisms fall into the domain of _Cooperative AI_ . In addition, Cooperative AI also studies human cooperation through the lens of AI and how AI can help humans achieve cooperation. More precisely, Dafoe et al. (2020) classified Cooperative AI research into four broad topics: _Understanding_, _Communication_, _Commitment_, and _Institutions_ . They span various disciplines, from game theory to machine learning to social sciences. This survey has included discussions of cooperative AI, focusing on reinforcement learning in §3.3.2 and game theory in §4.3.1. - **Addressing Social Complexities** . The requirement of ethicality contains in itself a social component. “What is ethical” is often defined within a social context; therefore, its implementation in AI systems also needs to account for social complexities. Critch and Krueger (2020) provides proposals for many research topics 16 in this vein. One avenue of research focuses on the realistic simulation of social systems, including rulebased _agent-based modeling_ (Bonabeau, 2002; De Marchi and Page, 2014), deep learning-based simulation (Sert et al., 2020), and those incorporating LLMs (Park et al., 2023a). These simulation methods could serve a diverse array of down-stream applications, from impact assessment (Calvo et al., 2020; Fernandes et al., 2020) to multi-agent social learning (Critch and Krueger, 2020). On another front, the fields of _social choice_ (Sen, 1986; Arrow, 2012) and, relatedly, _computational social choice_ (Brandt et al., 2016) have aimed to produce mathematical and computational solutions for preference aggregation in a diverse population, among other goals. It has been argued that a similar approach when combined with human preference-based alignment methods ( _e.g._, RLHF and most other methods introduced in §2), could supplement these methods to guarantee a fair representation of everyone’s preferences (Leike, 2023b; Collective Intelligence Project, 2023). There have been early-stage experiments on this proposal (Bakker et al., 2022; Köpf et al., 2024). To complement this approach of learning values from crowds, it has also been argued that embodied values in AI systems should undergo continual progress over the long term as opposed to being permanently locked-in (Kenward and Sinclair, 2021), in order to navigate through emerging challenges, as well as to become future-proof and meet potential _unknown unknowns_ in the moral realm. **Malicious Use** Malicious actors can deliberately use AI to cause harm. Already, deepfakes have been used by criminals to enable scams and blackmail (Cao and Baptista, 2023). As AI systems develop more dangerous capabilities, the threat of misuse looms larger. Biological weapons provide one concerning example of how AI could be maliciously used to cause harm. Research has shown that large language models can provide detailed, step-by-step instructions about synthesizing pandemic potential pathogens (Soice et al., 2023). In addition to spreading information about how to create biological weapons, AI could help design new pathogens that are more lethal and transmissible than existing illnesses (Sandbrink, 2023). Terrorist groups such as Aum Shinrikyo (Danzig, 2012) have already attempted to build biological weapons in order to cause widespread destruction, and AI could make it easier for small groups to create biological weapons and start global pandemics. Other kinds of malicious use could include using AI to launch cyberattacks against critical infrastructure (Mirsky et al., 2023), or create autonomous agents that survive and spread outside of human control (Bengio, 2023). As new dangerous capabilities arise in AI systems, thorough evaluations will be required to determine how an AI system could be used to cause harm. Malicious use might not be considered a failure of alignment because when an AI system behaves according to the intentions of a malicious user, this system would be aligned with its user but would still pose a serious threat to society. Policies to ensure that AI is aligned with the public interest will be essential to avert this threat. **Collective Action Problems** Many AI developers are racing to build and deploy powerful AI systems (Grant and Weise, 2023). This incentivizes developers to neglect safety and race ahead to deploy their AI systems. Even if one developer wants to be careful and cautious, they might fear that slowing down to evaluate their systems and invest in new safety features thoroughly might allow their competition to outpace them (Armstrong et al., 2016). This creates a social dilemma where individual AI developers and institutions rationally pursuing their own interests can lead to suboptimal outcomes for everyone. Success in competition between AI systems may be governed by evolutionary dynamics, where the strongest and most self-interested AI systems could be the most likely to survive (Hendrycks, 2023). Preventing these collective action problems from causing societal catastrophes could require intervention by national and international AI policies to ensure that all AI developers uphold common safety standards. In a broader context, _Malicious Use_ can be considered effective alignment between AI systems and individuals with impure intentions, but without alignment with universally held human values. Concurrently, _Collective Action_ _Problems_ can be regarded as a consequence of competition, leading developers to neglect the crucial aspect of AI alignment in ensuring model safety. Broadly speaking, the connection between AI alignment and AI safety has progressively become more intertwined, resulting in a gradual blurring of boundaries. **2** **Learning from Feedback** Learning from feedback aims to transmit human intentions and values to AI systems. It serves as the foundation for _forward alignment_ . In this section, we focus on the dynamic process of learning from feedback, categorizing it into three key elements: (1) _AI System_ : refers to systems that require alignment, such as pre-trained LLMs; (2) _Feedback_ : provided by an advisor set, which may consist of humans, AI, or humans assisted by AI, _etc._ This serves as the information used to adjust the AI system; (3) _Proxy_ : a system developed to model feedback to facilitate more accessible learning. For example, human preference rankings of AI system behaviors serve as feedback, while a reward model acts as the corresponding proxy. From these elements, we identify two pathways by which the AI system learns from feedback: (1) Direct learning from the feedback itself and (2) Indirect learning via proxies that model the feedback. 17 2.1 Feedback Types Figure 4: Overview of the learning from the feedback process. Two learning pathways emerge: direct feedbackbased learning and proxy-mediated learning ( _e.g._, RLHF). We adopt a _human-centric_ perspective, viewing AI systems as _black boxes_ and categorizing the forms of feedback presented to AI systems into four types: Label, Reward, Demonstration, and Comparison. Following this process, we proceed to §2.1 where we discuss different feedback types from the alignment perspective, highlighting various methods of providing information to AI systems. In the following sections, we introduce key concepts that have recently provided insights into developing powerful AI systems (Christiano et al., 2017) and aligning them with human intent (Touvron et al., 2023). §2.2 focus on Preference Modeling emphasizing its role in creating proxies that help humans provide feedback to complex or hard-to-evaluate AI systems. Next, we explore Policy Learning in §2.3, focusing on key research directions for developing capable AI systems through feedback. The discussion then naturally transitions to scalable oversight in §2.4, where we reflect on the learning process and objectives from a broader alignment perspective. **2.1** **Feedback Types** Feedback is a crucial link between AI behaviors to human intentions (Stumpf et al., 2007, 2009) leveraged by AI systems to refine their objectives and more closely align with human values (Glaese et al., 2022), this includes two primary meanings: (1) During system construction, external sources provide feedback on the AI system’s output, guiding refinements to the system’s architecture or its internal information (Zhou, 2021). (2) After the system deployment, it will continuously adapt to changes in external environmental data, maintaining the architecture or fundamental strategy of the system unchanged, with methods such as adaptive control (Åström and Wittenmark, 2008; Åström and Murray, 2021) and in-context learning (Dong et al., 2022). For a precise and detailed discussion of the feedback types with precision and detail, it is essential to initially define _feedback_ within the scope of alignment. Considering diverse AI systems in alignment research, we embrace an _human-centric_ approach. Instead of delving deep into the complex system mechanics, we propose a taxonomy to classify feedback according to its _direct_ _presentation forms_ to the system. This section introduces four types of feedback employed to align AI systems commonly: label, reward, demonstration, and comparison. It is worth noting that beyond explicit feedback, there are approaches that exploit the information embedded in vast amounts of unlabeled data through unsupervised pre-training (Parisi et al., 2022) and semi-supervised learning (Xu et al., 2018b), showing considerable promise in enhancing model capabilities (Zhou et al., 2024). **Label** Label feedback refers to one or more meaningful information tags attached to the original data item (Hastie et al., 2009), which stands as the most direct form, offering explicit guidance and delineating expected outputs for 18 2.1 Feedback Types AI systems. This type of feedback prompts AI systems to learn from input-output pairings provided by expert advisors. For example, in supervised learning, an AI model is trained using a dataset of labeled input-output pairs, denoted by _D_ = {( _xi,yi_ )} _[N]_ _i_ =1 [. Here,] _[ y][i]_ [ represents the true labels corresponding to the input data] _[ x][i]_ [, and] _[ N]_ [ signifies] the total number of samples in the dataset. The essence of the learning process revolves around minimizing a loss function L ( _e.g._, MSE), which measures the disparity between the predictions of the model, _f_ ( _x_ ; _θ_ ), and the ground truth labels _y_, based on the model parameters, _θ_ . The advantage of label feedback is its unambiguous nature and simplicity in interpretation. However, due to the inability of label feedback to fully encapsulate the underlying logic of this choice, employing such feedback in model training can result in target variable bias (Guerdan et al., 2023). And, its utility might diminish when tackling complex tasks beyond mere classification or regression (Lake et al., 2017; Marcus, 2018). For example, in tasks like optimizing algorithms (Fawzi et al., 2022; Mankowitz et al., 2023), video game playing (Baker et al., 2022), and multi-modal generation (OpenAI, 2023b), it is not only impractical to provide explicit instructions for every conceivable situation but also insufficient to solely rely on label feedback to build systems that surpass human capabilities. **Reward** A reward is an absolute evaluation of a single output from an AI system, represented as a scalar score (Silver et al., 2021) or a vector of scores (Wu et al., 2024), each independent of other outputs. Feedback based on rewards provides a quantified evaluation of the AI system, allowing for direct guidance in behavior adjustments. This type of feedback typically originates from pre-designed, rule-based functions or procedures. For example, in the MuJoCo simulation, environments from OpenAI Gym (Brockman et al., 2016), the task is to guide the agent moving forward effectively. To this end, an effective rule-based reward function can be formulated as a composite of several key components: maintaining a healthy status, encouraging forward movement, minimizing control exertion, and regulating contact intensity. The advantage of reward feedback is that the designer does not need to delineate the optimal behavior while allowing the AI system to explore to find the optimal policy (Kaelbling et al., 1996; Mnih et al., 2015; Silver et al., 2016, 2017). However, crafting flawless rules to determine scores for functions that evaluate the output of AI systems (Everitt et al., 2017; Victoria et al., 2020; Pan et al., 2021) or directly assigning calibrated and consistent scores to each AI system output (Isbell et al., 2001; Thomaz and Breazeal, 2008; Christiano et al., 2017; Casper et al., 2023b) is challenging for human. This is due to the inherent complexity of the tasks, where it’s impractical to account for every nuance. Additionally, flawed or incomplete reward functions can lead to dangerous behaviors misaligned with the intention of the designer, such as negative side effects and reward hacking (Hadfield-Menell et al., 2017b; Skalse et al., 2022). Thus, merely from the alignment perspective, perhaps the most important limitation of feedback based on rewards is that it may be difficult to rule out manipulation (Shevlane et al., 2023), which amounts to reward tampering and reward gaming (Leike et al., 2018; Everitt et al., 2021; Skalse et al., 2022) in this context. CIRL in §2.4.5, provides insights into this particular issue. **Demonstration** Demonstration feedback is the behavioral data recorded from expert advisors while achieving a specific objective (Hussein et al., 2017). Demonstrations can take on various forms, including videos (Shaw et al., 2023), wearable device demonstrations (Edmonds et al., 2017; Wang et al., 2023a), collaborative demonstrations (Bozorgi and Ngo, 2023), and teleoperation (Zhang et al., 2018d). If the dynamics of the demonstrator and the AI learner are identical, the demonstration can directly constitute a trajectory made up of state-action pairs (Zhang et al., 2023b). These state-action pairs can also be partially observable (Torabi et al., 2018; Brown et al., 2019). For example, a video can be recorded of a human expert performing a robotic manipulation task, such as grasping an object with a robotic hand. One can subsequently annotate each video frame with the associated robot state (Shaw et al., 2023) and action (Baker et al., 2022) for each frame. This results in a dataset of state-action pairs from the human demonstration that can be used to train the agent’s policy to imitate the expert behavior. This feedback leverages the expertise and experience of advisors directly, obviating the need for formalized knowledge representations (Fang et al., 2019; Dasari et al., 2023). However, it may falter when confronting tasks that exceed the advisors’ realm of expertise (Hussein et al., 2017). Additionally, it faces challenges stemming from the noise (Sasaki and Yamashina, 2020) and suboptimality (Attia and Dayan, 2018) in real-world advisor demonstrations (Yang et al., 2021). Furthermore, human advisors, prone to imprecision and errors, can introduce inconsistencies (Zhu et al., 2019; Hejna III and Sadigh, 2022). Meanwhile, there might be a need for a vast amount (Sasaki and Yamashina, 2020) and diverse set (Beliaev et al., 2022) of demonstrations within acceptable costs, which results in significant difficulty in learning reliable behaviors. **Comparison** Comparison feedback is a relative evaluation that ranks a set of outputs from an AI system and guides the system toward more informed decisions (Wirth et al., 2017). For example, this feedback form is manifested in Preference Learning (Fürnkranz and Hüllermeier, 2010), where the AI system discerns the preferences of advisors by comparing multiple examples. 19 2.1 Feedback Types The fundamental advantage of comparison feedback is humans’ capacity to quickly handle tasks and objectives that are hard for precise evaluation (Hüllermeier et al., 2008; Christiano et al., 2017; Ouyang et al., 2022). Nevertheless, beyond common factors like noise in the feedback and unmodeled contextual elements that hinder the model’s convergence to true objectives, the absolute differences between different items become obscured. Consequently, the performance of a strategy tends to optimize towards a median target rather than an average target. Casper et al. (2023b) illustrates this with an example of action _A_, always yielding a value of 1, and action _B_, which yields 10 in 40% of cases and 0 in 60%. When assessed based on comparison feedback, action _A_ is deemed superior to _B_, even though _B_ possesses a higher expected return. It also has the inherent limitation of potentially requiring a substantial amount of comparative data (Fürnkranz and Hüllermeier, 2003; Gao et al., 2023), although some studies indicate that the necessary quantity may be relatively smaller (Christiano et al., 2017). Preference modeling is an example of using this type of feedback, as detailed in §2.2. **Discussion** All types of feedback can be provided to AI systems interactively and online. This process engenders synchronous iterations between providing feedback and AI system updates, underscoring rapid, focused, and incremental model modifications (Amershi et al., 2014; Holzinger, 2016). For instance, demonstration feedback can manifest in the form of online corrections (Bajcsy et al., 2018; Li et al., 2021b; Losey et al., 2022). Interactively providing feedback emphasizes the role of interactivity in the learning process, allowing AI systems to evolve based on interactive experiences. In active learning, robots actively engage in data discovery and acquisition, thereby facilitating learning throughout the process of online deployment (Taylor et al., 2021). And in interactive learning, feedback manifests in the form of guided corrections that online rectify missteps in the behavior of the AI system (Fails and Olsen Jr, 2003; Amershi et al., 2014; Saunders et al., 2022). For example, the interactive image segmentation emphasizes simple (Zhang et al., 2020a), intuitive (Rother et al., 2004; Xu et al., 2016), and real-time (Liu et al., 2022) interactions. One of the essential advantages of interactively providing feedback is its ability to fine-tune AI systems in realtime, allowing users to interactively explore the model’s space (Amershi et al., 2014) to ensure quick and subtle alignment with the directives of advisors (Shin et al., 2020; Wei et al., 2022; Zou et al., 2024b). Moreover, this process lessens the dependence on specialist knowledge and promotes better interpretability (Berg et al., 2019). However, it may be limited by the interactivity to choose time-intensive algorithms (Fails and Olsen Jr, 2003; Holzinger, 2016). Furthermore, considering more powerful AI systems are emerging, more universal interaction interfaces are also coming up, such as language (Lynch et al., 2023; OpenAI, 2023a) and vision (Yevgen Chebotar, 2023), which bridge the communication gap between humans and AI systems. In robotics, a series of studies have linked humanprovided language with rewards obtained by agents. This association enables the conveyance of nuanced human intentions through language, thereby guiding the generation of scalar feedback signals during the training (Fu et al., 2019; Goyal et al., 2019; Sumers et al., 2021; Zhou and Small, 2021; Lin et al., 2022b; Yu et al., 2023) and planning (Sharma et al., 2022) process. In the realm of LLMs, in-context learning (Dong et al., 2022) serves as a means to supplement information via language during deployment, thereby enhancing the alignment of LLMs with human intent. These various modes of feedback share a common trait – that they can all be seen as attempts by humans to convey a hidden reward function. Jeon et al. (2020) proposes and formalizes this position and unifies a wide array of feedback types by defining a parameterized reward function Ψ (·; _θ_ ) that underlies the feedback process. This allows the AI system to, for example, perform Bayesian inference on _θ_, regardless of the feedback type. Recently, techniques based on IL and RL have successfully constructed AI systems with significant capabilities (Baker et al., 2022; OpenAI, 2023b). However, this success naturally leads to two questions: - How can we define reward functions for more complex behaviors ( _e.g._, various sub-tasks in interactive dialogue), aiming to guide the learning process of AI systems? - How can we express human values such that powerful AI systems align better with humans, ensuring the system’s _controllability_ and _ethicality_ ? Endeavors incorporating preference modeling into policy learning have shown progress. The most notable achievements in this domain have been observed in constructing powerful LLMs (OpenAI, 2023a; Touvron et al., 2023; Anthropic, 2023c). Additionally, a series of policy learning studies have reported performance improvements. For instance, combining preference modeling with Inverse Reinforcement Learning (IRL) (Brown et al., 2019, 2020a) and offline RL (Shin et al., 2023), fine-tuning reward functions (Hejna III and Sadigh, 2022), modeling non-Markovian rewards (Kim et al., 2023), and aiding in the construction of intricate reward functions (Bukharin et al., 2023). Therefore, we consider preference modeling (as shown in §2.2) and policy learning (as shown in §2.3) as fundamental contexts for understanding the challenges faced in alignment and potential solutions. Next, we provide a brief overview of these specific techniques related to alignment. 20 2.2 Preference Modeling Table 2: A comparison of the three types of preference granularity in the context of sequential decision-making. Each type is defined according to its characteristics and the way it compares different elements of the learning process. The notation _i_ 1 ≻ _i_ 2 denotes that _i_ 1 is strictly preferred over _i_ 2. **Preference Granularity** **Definition** **Action** Compares two actions _a_ 1 and _a_ 2 within the same state _s_, denoted as _a_ 1 ≻ _s a_ 2. **State** Compares two states _s_ 1 and _s_ 2, denoted as _s_ 1 ≻ _s_ 2. **Trajectory** Compares two complete state-action sequence trajectories, denoted as _τ_ 1 ≻ _τ_ 2. Each trajectory _τ_ consists of state-action pairs at time _t_, expressed as _τ_ = { _s_ 0 _,a_ 0 _,s_ 1 _,a_ 1 _,...,sT_ −1 _,aT_ −1 _,sT_ }. **2.2** **Preference Modeling** In many complex tasks, such as dialogues (Ouyang et al., 2022), constructing precise rule-based rewards presents a challenge (Bender et al., 2021). At the same time, methods based on demonstration might require a substantial investment of expert human resources, resulting in high costs. Currently, preference modeling based on comparison feedback (Akrour et al., 2011) has emerged as a very promising method (Ouyang et al., 2022; OpenAI, 2023a; Touvron et al., 2023) to assist in fine-tuning powerful AI systems (Amodei et al., 2016). Typically, it is necessary to iteratively explore the system dynamics while acquiring expert preference data to gain more knowledge about the optimization objectives. This process is known as _Preference Elicitation_ (Wirth and Fürnkranz, 2013; Wirth et al., 2017; Christiano et al., 2017; Cabi et al., 2020), which is crucial for obtaining rich, valuable feedback related to AI system outputs, thus guiding the alignment process (Hejna III and Sadigh, 2022). Within _Preference Elicitation_, two core decisions that need to be determined are the _Granularity of Preference_ and the _Category of Preference_ . This paper introduces these within sequential decision-making problems, but the insights derived apply to a broad array of AI systems (Amodei et al., 2016; Christiano et al., 2018; Leike et al., 2018). **Granularity of Preference** Preference (Wirth et al., 2017) can primarily be categorized into three types by granularity: _Action_, _State_, and _Trajectory_ (as shown in Table 2). The _Action_ preference focuses on comparing actions within a particular state, specifying the preferred action under specific conditions. When translated into trajectory preferences, it may impose challenges such as evaluators’ expertise needs and potential information loss. The _State_ preference deals with comparing states. It encapsulates preference relations among states but requires assumptions about state reachability and independence when translating to trajectory preferences. The _Trajectory_ preference considers whole state-action sequences, offering more comprehensive strategic information. It inherently assesses long-term utility and depends less on expert judgment. Christiano et al. (2017) demonstrates, using ablation studies, that in the settings that they studied, longer trajectory segments yield more informative comparisons on a per-segment basis. Such segments are also more consistently evaluated by humans in MuJoCo tasks. **Category of Preference** Diverse objectives exist within preference modeling. Based on their targets, preferences can be categorized into object preference and label preference (Fürnkranz and Hüllermeier, 2010). Specifically, object preference operates on a set of labels for each instance, whereas label preference acts on a set of objects themselves. One can further classify them differently based on the form of preferences. - **Absolute Preferences** . Absolute preferences independently articulate each item’s degree of preference. **– Binary** . Classifying items as liked or disliked offers a simplistic and straightforward model of user preference (Tsoumakas and Katakis, 2007; Cheng et al., 2010a). **– Gradual** . This can be further distinguished between numeric and ordinal preferences. Numeric preferences employ absolute numerical values, such that each item receives a numerical score, which reflects the extent of preference (Cheng et al., 2010b). On the other hand, ordinal preferences entail a graded assessment of a fixed set of items as either preferred, less preferred, or intermediary, _etc._, enabling the depiction of user preferences without including specific numerical measurements (Cheng et al., 2010a). - **Relative Preferences** . Relative preferences define the preference relation between items. **– Total Order** . This form establishes a comprehensive preference relation covering all item pairs, asserting an absolute ordering of preferences ranging from the most preferred to the least (Hüllermeier et al., 2008). **– Partial Order** . Because users may not exhibit a distinct preference between two items in some instances (Cheng et al., 2010c), this allows for incomparable item pairs. 21 2.3 Policy Learning **Reward Model** Reward modeling transfers comparison feedback (Fürnkranz and Hüllermeier, 2010; Wirth et al., 2017) to the scalar reward form, facilitating policy learning (Christiano et al., 2017; Cabi et al., 2020; Touvron et al., 2023). Given pairs of actions ( _y_ 1 _,y_ 2) performed by the RL agent in the same state. The preference is denoted as _yw_ ≻ _yl_ | _x_, where _yw_, _yl_ represents the preferred and less preferred action respectively among ( _y_ 1 _,y_ 2). We assume these preferences emerge from a latent reward model _r_ [∗] ( _x,y_ ), which we lack direct access to. Several methods exist to model such preferences, _e.g._, the Bradly-Terry Model (Bradley and Terry, 1952), Palckett-Luce ranking model (Plackett, 1975), _etc._ Under the BT model, the distribution of human preference, denoted as _p_ [∗], can be formalized as, exp - _r_ [∗] ( _x,y_ 1)� - _p_ [∗] ( _y_ 1 ≻ _y_ 2 | _x_ ) = exp ~~�~~ _r_ [∗] ( _x,y_ 1) ~~�~~ + exp ~~�~~ _r_ [∗] ( _x,y_ 2) ~~�~~ = _σ_ _r_ [∗] ( _x,y_ 1) − _r_ [∗] ( _x,y_ 2) _._ where _σ_ ( _x_ ) = 1 _/_ (1 + exp(− _x_ )) is the logistic sigmoid function. Subsequently, we use the derived preference rankings to train the parameterized reward model, optimizing its parameters through maximum likelihood. - - - - [��] LR ( _θ_ ) = −E( _x,yw,yl_ )∼D log _σ_ _rθ_ ( _x,yw_ ) − _rθ_ ( _x,yl_ ) In this negative log-likelihood loss, the problem is a binary classification task, where D signifies the static dataset - - _N_ _x_ [(] _[i]_ [)] _,yw_ [(] _[i]_ [)] _[,y]_ _l_ [(] _[i]_ [)] sampled from _p_ [∗] ( _i.e._, human-labeled comparisons). _i_ =1 Reward models enable human users to impart specific preferences to these systems via evaluations, thereby circumventing the complex task of defining objectives explicitly. Initially, the studies by Knox (2012); Knox and Stone (2013) distinctively treat human reward as separate from the traditional rewards of MDP and conduct a reward modeling process around it. Transitioning from these simpler cases, Christiano et al. (2017) propose that utilizing supervised learning to construct a distinct reward model asynchronously can substantially diminish interaction complexity by approximately three orders of magnitude. The study conducted by Ibarz et al. (2018) integrates expert demonstrations with human preferences, such that the policy initially mimics expert demonstrations and then sequentially collects human trajectory annotations, trains the reward model, and updates the policy. This research also provides practical insights for precluding the overfitting of the reward model and the occurrence of _reward hacking_ - a scenario where escalating rewards do not translate to improved performance, especially when the policy is excessively trained. Additionally, a random policy might rarely exhibit meaningful behavior for tasks that surpass the complexity of Atari (Palan et al., 2019; Jeon et al., 2020). This implies that for effective annotation, the policy itself must possess certain capabilities to perform improved behavior. Offline settings also benefited from the reward model. Cabi et al. (2020) proposes reward sketching to efficiently learn a reward model that leverages humans’ episodic judgments for automated reward annotation of historical data, enabling large-scale batch RL. Qiu et al. (2024) provides an empirically-grounded theory of reward generalization in RMs, based on which a new type of RM based on tree-structured preferences is proposed and experimentally validated. Importantly, the reward model provides an essential tool for aligning powerful LLMs. Stiennon et al. (2020) employs reward models grounded in human preferences for text summarization tasks, resulting in significant policy enhancements. This work also delves into the issues of distribution shift and reward model generalization, revealing that the effectiveness of the reward model correlates with data scale and parameter size. Building upon this work, InstructGPT (Ouyang et al., 2022) extends the reward model paradigm to broader dialogue task reward modeling and introduces a preference-optimizing loss function for multiple responses to mitigate overfitting. Furthermore, this research reveals that the preferences derived from the reward model can be generalized across different groups. - _N_ sampled from _p_ [∗] ( _i.e._, human-labeled comparisons). _i_ =1 **2.3** **Policy Learning** Policy learning aims to learn the mapping from perceived states to actions taken when in those states (Sutton and Barto, 2018) to optimize a model’s performance in specific tasks. Numerous alignment-related challenges manifest within policy learning (as shown in §1.1.2). Consequently, policy learning provides a crucial backdrop for alignment, and its techniques can further advance alignment objectives (Amodei et al., 2016; Christiano et al., 2018; Ibarz et al., 2018). This section discusses various domains within policy learning and then introduces RLHF, a powerful technique for policy learning (OpenAI, 2023a; Touvron et al., 2023). **2.3.1** **Background** We introduce some general areas of policy learning here to give readers a general background. **Reinforcement Learning (RL)** RL enables agents to learn optimal policies by trial and error via interacting with the environment (Sutton and Barto, 2018). This paradigm has achieved great success in tackling complex tasks (Agostinelli et al., 2018; Yu et al., 2021; Fawzi et al., 2022; Baker et al., 2022; Afsar et al., 2022; Mankowitz et al., 22 2.3 Policy Learning 2023; OpenAI, 2023b), demonstrating its potential for decision-making and control in complex state spaces. The goal of RL is to learn a policy _π_ which executes actions _a_ in states _s_ to maximize the expected cumulative reward under environment transition dynamics _P_ and the initial state distribution _ρ_ 0:   ∞ _γ_ _[t]_ _r_ ( _st_ ) _t_ =0 _,_ where _s_ 0 ∼ _ρ_ 0(·) _, at_ ∼ _π_ (·| _st_ ) _, st_ +1 ∼ _P_ (·| _st,at_ ) _._  _π_ [∗] = argmax _π_ E _s_ 0 _,a_ 0 _,..._    Even though RL still faces challenges like sample efficiency and stability (Bu¸soniu et al., 2018). Proximal policy optimization (PPO) (Schulman et al., 2017) is an influential algorithm in the RL community, serving as the key algorithm for RLHF (Ouyang et al., 2022). The key idea of PPO is to limit the policy update to prevent significant deviations from the original policy by introducing a proximity objective. Sikchi et al. (2023) unifies several RL and Imitation Learning (IL) algorithms under the framework of dual RL through the lens of Lagrangian duality. **Preference-based Reinforcement Learning (PbRL)** PbRL (Wirth et al., 2017) seeks to facilitate training RL agents using preference feedback instead of explicit reward signals (Christiano et al., 2017; Sadigh et al., 2017). [26] PbRL integrates the advantages of preference learning and RL, broadening the application range of RL and mitigating the difficulties associated with reward function formulation, and has been efficaciously deployed in a variety of tasks such as robotic instruction (Kupcsik et al., 2013), path planning (Jain et al., 2013), and manipulation (Shevlane et al., 2023). In PbRL, the emphasis predominantly lies on trajectory preferences ( _i.e._, comparisons of state-action sequences segment) (Wirth et al., 2017). Such trajectory preferences encapsulate a human evaluation of various behavioral outcomes rather than single states, rendering PbRL more suitable for non-expert users (Christiano et al., 2017; Shin et al., 2023; Kim et al., 2023). A general example of PbRL is the _weighted pairwise_ _disagreement loss_ (Duchi et al., 2010) balancing multiple potentially conflicting preferences to identify a singular optimal policy: L( _π,ζ_ ) = _N_ _αiL_ ( _π,ζi_ ) _,_ _i_ =1 where L( _π,ζ_ ) is the aggregated loss for policy _π_ over all preferences _ζ_, _αi_ is the weight of the _i_ th preference, and _L_ ( _π,ζi_ ) is the loss associated with the policy _π_ in relation to the specific preference _ζi_ . Compared to exact numerical rewards, preference feedback has several benefits (Wirth et al., 2017), such as (1) circumventing arbitrary reward design, reward shaping, reward engineering, or predefined objective trade-offs, (2) diminishing reliance on expert knowledge, and (3) decoupling training loop with human by modeling preferences (Akrour et al., 2012). However, PbRL also faces challenges, including credit assignment problems due to temporal delays, practical exploration of preference space (Wirth et al., 2017), the potential need for massive data (Ouyang et al., 2022), and the inability to use the learned preference model for retraining (McKinney et al., 2022). **Imitation Learning (IL)** IL (Schaal, 1999; Syed et al., 2008), also referred to as learning from demonstration or apprenticeship learning, focuses on emulating human behaviors within specific tasks. The agent learns a mapping between observations and actions and refines its policy by observing demonstrations in a collection of teacher demonstration data D (Bakker et al., 1996; Hussein et al., 2017). This process obviates the need for environmental reward signals (Hussein et al., 2017). Broad IL (Cotra, 2018) aims to replicate human desires and intentions, effectively creating replicas of human decision-making processes. This concept is central to technologies such as Iterated Distillation and Amplification (IDA, as shown in §2.4.2) (Christiano et al., 2018). On the other hand, Narrow IL aims to replicate specific human behaviors within given tasks. Behavioral cloning (BC) (Bain and Sammut, 1995; Ross et al., 2011; Osa et al., 2018) is a simple (Pomerleau, 1991; Ravichandar et al., 2020) strategy that learns directly from demonstrations using supervised learning (Schaal, 1996). BC method specifically seeks to optimize the policy parameters, _φ_, with the objective of aligning the policy _πφ_ ( _a_ | _s_ ) closely with the expert policy _πE_ ( _a_ | _s_ ). This alignment is achieved through the minimization of the negative log-likelihood, as delineated in the following (Lynch et al., 2020): - LBC( _φ_ ) = −E( _s,a_ )∼ _πE_ log _πφ_ ( _a_ | _s_ ) _._ Here, the expectation is computed over state-action pairs sampled from the expert policy, _πE_ . However, it faces the Out-of-Distribution (OOD) problem, arising from the difference between the training and testing distributions (Ross et al., 2011; Ho and Ermon, 2016; Reddy et al., 2019; Zhou et al., 2022). Adversarial imitation learning 26Notably, Sadigh et al. (2017) explicitly maintains a probabilistic belief over the true reward function during learning, and actively constructs queries to the human to reduce uncertainty maximally. Both traits are in a similar spirit to _cooperative_ _inverse reinforcement learning_ (CIRL), and later work also continues this theme (Reddy et al., 2020). See §2.4.5 for more. 23 2.3 Policy Learning methods (Ho and Ermon, 2016; Fu et al., 2018a; Lee et al., 2019; Ghasemipour et al., 2020) have demonstrated an ability to enhance the robustness of policies against distribution shifts. However, these methods learn nonstationary rewards, which cannot be used to train new policies (Ni et al., 2021). **Inverse Reinforcement Learning (IRL)** Unlike the paradigm of IL, IRL (Adams et al., 2022) focuses on deriving a reward function from observed behavior (Ng et al., 2000; Arora and Doshi, 2021). Standard IRL methods include the feature matching methods (Abbeel and Ng, 2004), which assumes optimal expert behavior or decision processes, as well as the maximum entropy methods (Ziebart et al., 2008) and the Bayesian methods (Ramachandran and Amir, 2007), both of which do not require optimal behavior. IRL guarantees robustness to changes in the state distribution but at the cost of increased computational complexity due to the extra RL step (Ho and Ermon, 2016; Fu et al., 2018b). This interaction, meanwhile, introduces inherent RL challenges, _e.g._, sample efficiency (Yu, 2018) and potential dangers in environment interaction (Garcıa and Fernández, 2015). Additionally, identifying the reward function remains a challenge (Kim et al., 2021). **2.3.2** **Reinforcement Learning from Human Feedback (RLHF)** RLHF expands upon PbRL within the domain of DRL (Christiano et al., 2017), aiming to more closely align complex AI systems with human preferences (OpenAI, 2023b). Its principal advantage is that it capitalizes on humans being better at judging appropriate behavior than giving demonstrations or manually setting rewards. This approach has gained significant traction, particularly in fine-tuning LLMs (Ouyang et al., 2022; OpenAI, 2023a; Touvron et al., 2023). Nonetheless, RLHF encounters obstacles (Casper et al., 2023b), including data quality concerns, the risk of reward misgeneralization, reward hacking, and complications in policy optimization. Specifically, RLHF can also be viewed as a Recursive Reward Modeling (RRM) process (as shown in §2.4.3) without deep recursive modeling (Leike et al., 2018). Here, we provide a brief review of the RLHF methodology. The genesis of RLHF can be traced back to Knox and Stone (2008, 2012), subsequently broadening its reach to domains such as social robots (Knox et al., 2013) and human-AI cooperative learning (Griffith et al., 2013). Besides focusing on the association between feedback and policy, Loftin et al. (2016) models the connection between feedback and the trainer strategy. Christiano et al. (2017) extended RLHF to simulated robotic tasks, demonstrating its potential effectiveness. It’s worth noting that one of the significant applications of RLHF has been in the field of LLMs. Some work found that LLMs trained with RLHF (Ouyang et al., 2022; Korbak et al., 2023; Christiano, 2023) are more creative and human alignment compared to models trained via supervised or self-supervised learning approaches (Kenton and Toutanova, 2019; Brown et al., 2020b). The importance of RLHF is not merely limited to allowing LLMs to follow human directives (Ouyang et al., 2022). It helps LLMs better align by giving them important qualities like being helpful, harmless, and honest (Bai et al., 2022a). Due to these improvements, many works use RLHF for aligning LLMs (Ziegler et al., 2019; Stiennon et al., 2020; Bai et al., 2022a; Glaese et al., 2022; OpenAI, 2023a; Touvron et al., 2023). Additionally, Dai et al. (2024b) integrates the Safe RL (Garcıa and Fernández, 2015) framework with the RLHF, addressing the inherent tension between aligning helpfulness and harmfulness (Bai et al., 2022a). Future efforts can be focused on reducing dependence on human annotation (Wang et al., 2023c; Sun et al., 2024) and improving the efficacy of the reward model by leveraging iterative RLHF methods ( _i.e._, integrating it with debate frameworks (Irving et al., 2018)), _etc._ Qiu et al. (2024) has also built a formal framework of the RLHF process portraying it as an autoencoding process over text distributions, and enables analysis of convergence properties in RLHF. We review the RLHF pipeline from the Ziegler et al. (2019); Ouyang et al. (2022); Rafailov et al. (2024) to give a general framework. It usually consists of three stages: - **Supervised Fine-tuning (SFT)** . RLHF usually starts with a pre-trained language model, then fine-tuned using supervised learning – specifically, maximum likelihood estimation – on a high-quality human instruction dataset tailored for downstream tasks to obtain a model _π_ [SFT] . Examples of these tasks include dialogue handling, instruction following, and summarization (Some open-source datasets include Alpaca Data (52k instruction-following data) (Taori et al., 2023), Vicuna (70K user-shared ChatGPT conversations) (Chiang et al., 2023), _etc._ ). This stage can also be carried out at any other stage. - **Collecting Comparison Data and Reward Modeling** . This phase includes collecting comparison data, which is subsequently used to train a reward model. The SFT model is given prompts denoted as _x_ to generate pairs of responses ( _y_ 1 _,y_ 2) sampled from _π_ [SFT] ( _y_ | _x_ ). These pairs are subsequently shown to human annotators, who indicate a preference for one of the responses. Then as discussed in §2.2, comparison data is used to construct the reward model _rθ_ . - **Policy Optimization via Reinforcement Learning** . The final step is optimizing LLM as a policy _π_ through RL, guided by the reward model _rθ_ . The process of LLMs generating responses from prompts is modeled as 24 2.3 Policy Learning a bandit environment (Ouyang et al., 2022), where a reward is obtained from reward model _rθ_ at the end of each response. The primary objective of RL is to adjust the parameters _φ_ of the LLMs such that the expected reward on training prompt dataset DRL is maximized: - argmax E _x_ ∼DRL _,y_ ∼ _πφ_ _rθ_ ( _x,y_ ) _._ _πφ_ Typically, an additional per-token KL penalty derived from the SFT model _π_ [SFT] is involved to mitigate the reward over-optimization. In addition, the integration of gradients from pre-training distribution Dpretrain helps maintain model performance, denoted as PTX loss in (Ouyang et al., 2022). As a result, a more comprehensive practical objective function is introduced: J ( _φ_ ) = E _x_ ∼DRL _,y_ ∼ _πφ_ - - - [�] _rθ_ ( _x,y_ ) − _β_ log _πφ_ ( _y_ | _x_ ) _/π_ [SFT] ( _y_ | _x_ ) + _η_ E( _x,y_ )∼Dpretrain - - - [�] log _πφ_ ( _y_ | _x_ ) _,_ where _β_ and _η_ are coefficients determining the intensity of the KL penalty and the mixture of pretraining gradients respectively. This process refines the LLMs to generate responses that better align with human preferences for the prompts used during training. Though RLHF has been proven effective for aligning LLMs with human preferences, this method has problems like complex implementation, hyper-parameter tuning, sample efficiency (Choshen et al., 2019), and computational overhead (Yuan et al., 2024), making it hard to scale up. A straightforward approach is rejection sampling (Dong et al., 2023; Touvron et al., 2023) paired with finetuning on the best examples. For every prompt, _K_ responses are sampled from the model. Each response is then assessed with the reward model, and the one with the highest reward is selected as the best response. This selected response is later used for model fine-tuning. Zhang et al. (2023a) formulates the language model instruction alignment problem as a goal-reaching reinforcement learning problem and proposes the HIR algorithm. The method unfolds in two stages: online sampling and offline training. During online sampling, the algorithm samples the LLM at a high temperature. In the offline training stage, instructions are relabeled based on generated outputs, followed by supervised learning using this relabeled data. HIR capitalizes on successful and failed cases without requiring additional parameters. RRHF, as introduced by (Yuan et al., 2024), aligns model probabilities with human preferences by scoring and ranking responses from multiple sources. With the necessity for only 1 or 2 models, its implementation is straightforward. RRHF reported it can effectively align language models with human preferences, producing performance on par with PPO. Gulcehre et al. (2023) proposes the ReST algorithm, which contains two loops: _Grow_ and _Improve_ . The _Grow_ loop uses the current model to sample and generate a dataset, while the _Improve_ loop iteratively trains the model on a fixed dataset. This algorithm provides a simple and efficient framework that allows repeated use of the fixed dataset to improve computational efficiency, showing significant improvement in the reward model scores and translation quality compared to supervised learning baselines. Motivated by the dependence of reward modeling on policy optimization in RLHF, Chakraborty et al. (2024) propose PARL, a bilevel optimization-based framework. Rafailov et al. (2024) introduces the DPO, which demonstrates a mapping between reward functions and optimal policies. DPO is both simple and efficient, optimizing language models directly from human preference data, eliminating the need for an explicit reward model and multi-stage training. Moreover, Wang et al. (2024) discusses how diverse divergence constraints influence DPO and introduces a generalized approach, namely, _f_ -DPO. Azar et al. (2023) presents a general objective, Ψ PO, designed for learning from pairwise human preferences, circumventing current methods’ assumption: _pairwise preferences can be substituted with pointwise rewards_ . This objective analyzes RLHF and DPO behaviors, revealing their potential overfitting issue. The authors further delve into a specific instance of Ψ PO by setting Ψ as the Identity, aiming to mitigate the overfitting problems. They call this method IPO and furnish empirical results contrasting IPO with DPO. Hejna et al. (2024) introduces CPL, which utilizes a regret-based model of preferences that directly provides information about the optimal policy. Further research could explore why RLHF performs effectively with LLMs and the application of RLHF in multimodal (Yevgen Chebotar, 2023; OpenAI, 2023b) settings to facilitate the benefits of human-AI collaboration (Carlson and Demiris, 2010; Wu et al., 2021; Bi et al., 2021). See also Casper et al. (2023b) who offer a survey of open problems with RLHF. **Open Discussion** RLHF is frequently applied to the Safety Alignment of LLMs, yet many pressing issues remain unresolved. For example, how can we balance harmlessness and helpfulness in alignment? Dai et al. (2024b) attempt to integrate the SafeRL framework, specifically the cost model and reward model, into RLHF to address the inherent tension between these two indicators. Moreover, even without malicious intent, simply fine-tuning on benign and commonly used datasets can inadvertently reduce the safety alignment of LLMs, albeit to a lesser 25 2.4 Scalable Oversight: Path towards Superalignment extent (Qi et al., 2024) and fine-tuning on benign data is more likely to degrade the model’s safety (He et al., 2024). These findings suggest that fine-tuning aligned LLMs may introduce new safety risks, even with datasets that are considered absolutely safe. Generally, language models may exhibit _elasticity_, making them resistant to alignment efforts (Ji et al., 2024c). This raises a question: _how can we maintain impeccable safety alignment of models, even_ _after further fine-tuning?_ Human preferences can vary among individuals, groups, and societies, leading to divergent perspectives. This divergence is also evident when collecting preference data from annotators. To address this, Findeis et al. (2024) proposed a method to extract the underlying constitution governing the generation of a given dataset of preferences. Similar to Constitutional AI (Bai et al., 2022b), where a preference dataset is generated by an LLM based on a predefined constitution, _Inverse Constitutional AI_ aims to extract such a constitution that can be used to reconstruct the preference dataset. This problem can be formulated as an optimization problem: argmax{agreement( _po,p_ ( _c_ )) s.t. | _c_ | ≤ _n_ } _,_ _c_ where _po_ represents the original preferences, and _p_ ( _c_ ) are the constitutional preferences over a pairwise text corpus _T_, generated by an LLM _M_ using the constitution _c_ . The set is constrained to a maximum of _n_ natural language principles that are human-readable. Agreement is defined as the percentage of constitutional preferences _p_ ( _c_ ) that match the original preferences _po_ . Overall, the elicitation of a constitution can be seen as a compression task, where a constitution is generated based on a dataset and then used to reconstruct the preferences in the dataset as accurately as possible. To elicit such a constitution, the authors propose an algorithm that generates principles capable of explaining the preference data, followed by semantic clustering of these principles. To reduce the size of the set, they then subsample the principles and evaluate their ability by testing their reproducibility in reconstructing the preference data. Finally, the principles are filtered based on their relevance to the preference data. This method can be used to infer the constitution underlying a specific preference dataset and has the potential to identify underlying biases or reuse the constitution to generate new data, thus enlarging existing datasets or creating new datasets tailored to individual preferences. **2.4** **Scalable Oversight: Path towards Superalignment** Statistical learning usually rely on certain assumptions about data distribution, such as independence and identical distribution. Consequently, these algorithms fail in some situations, especially under specific distributions (Zhou et al., 2022). Challenges in elementary systems can be promptly identified through visual inspection (Christiano et al., 2018; Ngo et al., 2024). As AI systems become more powerful, insufficiently capturing the training signal or erroneous design of loss functions often leads to catastrophic behaviors (Russell et al., 2015; Hubinger et al., 2019c; Cotra, 2021) such as deceiving humans by obfuscating discrepancies (Russell, 2019), specification gaming (Victoria et al., 2020), reward hacking (Brown et al., 2020a), and power-seeking dynamics (Carlsmith, 2022). From a human perspective, these imply gaps between the optimized objectives of AI systems and the ideal goals in our minds. Thus, the issue of providing effective oversight in various decision-making becomes pivotal (Bowman et al., 2022; Li et al., 2023a), often termed as _scalable oversight_ (Amodei et al., 2016) arising from two practical challenges. - The high cost of humans frequently evaluating AI system behavior. For instance, the training process is timeconsuming, and incorporating humans directly into the training loop in real-time would significantly waste human resources and impede training efficiency (Christiano et al., 2017). - The inherent complexity of AI system behaviors makes evaluation difficult, especially on hard-to-comprehend and high-stakes tasks (Saunders et al., 2022), _e.g._, tasks such as teaching an AI system to summarize books (Wu et al., 2021), generate complex pieces of code (Pearce et al., 2022), and predict future weather changes (Bi et al., 2023). In this context, our primary focus is to present some promising directions that may have not yet been implemented generally for constructing scalable oversight (Amodei et al., 2016; Leike et al., 2018). **2.4.1** **From RLHF to RL** _**x**_ **F** The RLHF paradigm offers a framework for aligning complex systems (OpenAI, 2023a; Touvron et al., 2023). However, it encounters obstacles such as the inaccuracy of human evaluations and their associated high costs 26 2.4 Scalable Oversight: Path towards Superalignment Figure 5: A tree diagram summarizing the key concepts and literature related to Scalable Oversight. The root node represents Scalable Oversight whose goal is _ensuring AI systems remain aligned with human intent even as they_ _surpass human capabilities_ . The main branches represent promising frameworks such as Reinforcement Learning from Feedback (RLxF), Iterated Distillation and Amplification (IDA), Recursive Reward Modeling (RRM), Debate, and Cooperative Inverse Reinforcement Learning (CIRL). Further sub-branches list key works exploring each framework. This diagram provides an overview of research directions for constructing effective and safe oversight mechanisms as AI systems grow more complex. (Christiano et al., 2017; Casper et al., 2023b; Perez et al., 2023). A key limitation is the difficulty in utilizing RLHF to extend human feedback when creating AI systems with superhuman abilities (Wu et al., 2021). Building on the RLHF paradigm, we introduce _RLxF_ as a fundamental framework for scalable oversight, aiming to enhance feedback efficiency and quality and expand human feedback for more complex tasks. This enhances RLHF by incorporating AI components (Fernandes et al., 2023). The _x_ in _RLxF_ signifies a blend of AI and humans. We further explore concrete methodologies about _RLxF_ in the subsequent section. **Reinforcement Learning from AI Feedback (RLAIF)** RLAIF serves as an extension to RLHF. RLAIF extends the pipeline Bai et al. (2022a) found that LLMs trained via RLHF may avoid sensitive and contentious issues, potentially reducing models’ overall utility. To address these limitations, Bai et al. (2022b) proposed a training pipeline that uses feedbacks generated by the LLMs ( _e.g._, GPT-4 or other language models). Following pre-set criteria, the policy model self-evaluates and revises its responses during _red teaming_ . The initial policy model is then fine-tuned using the revised responses. Finally, the fine-tuned policy model evaluates the harmlessness of another language model’s responses (i.e., AI feedback). Similar to RLHF, a reward trained using this feedback to optimize the policy model. Lee et al. (2023a) compares the performance of models trained with RLAIF and RLHF on summarization tasks. Their results suggest that models trained with AI feedback aperformed almost identically to those trained with human feedback, though subtle differences remain. Conversely, Findeis et al. (2024) explored the inverse problem of CAI: _given a dataset of feedback, how can one extract a constitution that best enables a_ _LLM to reconstruct the original annotations?_ This problem not only converts AI feedback from preferences into a corresponding constitution but also offers a method for synthesizing new preference data for AI feedback. **Reinforcement Learning from Human and AI Feedback (RLHAIF)** RLHAIF integrates human and AI models to provide oversight. Wu et al. (2021) explores the feasibility of using AI to assist humans in summarizing books. This method facilitated human supervision and evaluation of the model performance by decomposing the book summarization task into subtasks, creating a tree-like structure. Meanwhile, Saunders et al. (2022) explores the use of AI to assist in human assessment of model efficacy. Their findings suggest that model-generated critiques help humans identify flaws they might have missed. Bowman et al. (2022) proposes a proof-of-concept experiment to demonstrate the potential of scalable oversight techniques based on _sandwiching_ (Cotra, 2021). When collaborating with an unreliable LLM, the outcomes reveal that humans significantly surpass the model and themselves. Perez et al. (2023) employs language models to autonomously generate datasets for evaluating the behavior of language models of varying scales. The authors produced 154 high-quality datasets validated by humans. These methods demonstrate the feasibility of using AI assistance to scale up human oversight over complex problems and various domains. To some extent, RLAIF and RLHAIF offers a viable alternative for creating a training loop with minimal human intervention, thus reduciing training costs. AI supervision obeying transparent and accessible AI behavior guidelines may significantly aid in achieving scalable oversight (Bowman et al., 2022). **Discussion** Efforts are underway to enhance RLHF by replacing pure humans alone (Leike et al., 2018). Given the multidimensional nature of human feedback, various approaches have been devised to offer focused human 27 2.4 Scalable Oversight: Path towards Superalignment judgments informed by specific rules. Examples of such rules encompass considerations like chat fluency (Saunders et al., 2022) and privacy safeguards (Carr, 2023). Saunders et al. (2022) deconstructs the requirements for quality dialogue into natural language guidelines that an agent should adhere to, asking for evaluations on each guideline individually. We can attain more efficient rule-conditioned reward models by collecting targeted human assessments and training models on this data. This approach substantially enhances the efficacy of dialogue agents, rendering them more helpful, accurate, and benign when compared to prompted language models. Carr (2023) proposes Reinforcement Learning from Privacy Feedback (RLPF), aiming to harmonize the output quality of language models with safeguarding privacy. The method exploits NLP techniques to conduct real-time privacy risk assessments of text generated by the models and subsequently adjusts the reinforcement learning feedback signals based on these evaluations. Expressly, if the generated text includes sensitive information, it incurs negative feedback, whereas high-quality, non-revelatory text receives positive feedback. As the model undergoes training, it incrementally refines its capabilities, enhancing text quality and minimizing privacy breaches concurrently. This approach offers a more efficient evaluation of privacy risks by employing established NLP techniques, in contrast to conventional learning methods, which depend heavily on large-scale manual data annotation. At their core, the _RLxF_ methods utilize the strategy of decomposing a large problem into smaller sub-problems, enabling the use of more efficient tools, such as AI and software, for rapid sub-problem resolution. By leveraging the solutions to these sub-problems, the resolution of the main issue can be expedited. These techniques can be regarded as elementary instances of IDA; the primary distinction lies in the absence of a continual iterative process. Nonetheless, evidence suggests they are promising to offer feedback for AI systems that exceed human performance (Wu et al., 2021). Consequently, these methods can serve as foundational techniques in the training of more advanced AI systems. **2.4.2** **Iterated Distillation and Amplification** Iterated Distillation and Amplification (IDA) introduces a framework for constructing scalable oversight through iterative collaboration between humans and AIs (Christiano et al., 2018). The process commences with an initial agent, denoted as _A_ [0], which mirrors the decision-making of a human, _H_ . _A_ [0] undergoes training using a potent technique that equips it with near-human-level proficiency (the distillation step); Then, collaborative interaction between _H_ and multiple _A_ [0] instances leads to the creation of an enhanced agent, _A_ [1] (the amplification step). The successive process is described [27] in Algorithm 1. Cotra (2018) distinguishes between broad and narrow definitions within both RL and IRL. Broad RL gives sparse reward signals to AI systems and allows autonomous exploration and optimization of cumulative future rewards. This can lead to super-human novel strategies but makes it hard to specify what we care about perfectly. Narrow RL gives dense feedback rewarding the reasonableness of choices instead of final outcomes. This makes ML systems more human-like but limits capabilities. Similarly, broad IRL infers deep long-term values from the full range of human behaviors, while narrow IRL only infers short-term instrumental values. The former is a higher risk, while the latter is limited in capabilities. During IDA training, narrow techniques are needed to ensure each agent itself mimics human behaviors. Specifically, narrow RL or IL can be used to train the agent to be as human-like and controllable as possible. Humans can leverage agents’ computing power and parallelizability to devise more far-sighted, macro strategies. This is essentially an amplification of human intrinsic capabilities. In the next iteration, agents again mimic this strengthened human-machine system using narrow techniques. This enables a gradual transition from narrow ability to broad ability while keeping the agents aligned with human values. As iterations increase, the human-machine system becomes more and more capable, gradually approximating a system that is both highly capable and aligned with human values, achieving both safety and capability. In other words, Narrow techniques are used to ensure agents follow human values, while the broadened human strategies in the amplification stage are a way of utilizing the agents, and do not expand the agents’ own learning goals. IDA is well illustrated by AlphaZero (Christiano et al., 2018; Nguyen, 2020). The algorithm starts with a simple policy ( _e.g._, random move selection) and learns from its self-play games, the _amplification_ phase. It then uses these games as training data to develop better move selection heuristics, the _distillation_ phase. This distillationamplification process can be repeated to create a fast and proficient Go-playing AI. Here, the distinction between alignment and capability is crucial (Mennen, 2018). An aligned but less capable AI tries to win but may not succeed against moderate opponents. A capable but poorly aligned AI achieves certain game properties other than winning. The goal is that AI is capable and aligned, proficient at the game, and aligned with the goal of winning the game. The feasibility of IDA has sparked considerable debate (Yudkowsky, 2018). IDA operates under a crucial assumption that _errors won’t continuously accumulate throughout the iterations_ (Leike et al., 2018). Thus, technical challenges persist during the distillation and amplification step, necessitating sufficiently advanced and safe learn 27We reference the pseudo-code by Cotra (2018) for this description. 28 2.4 Scalable Oversight: Path towards Superalignment **Algorithm 1** Iterative Distillation and Amplifcation 1: **procedure** IDA( _H_ ) 2: _A_ ← random initialization 3: **repeat** 4: _B_ ← AMPLIFY( _H,A_ ) 5: _A_ ← DISTILL( _B_ ) _▷_ Repeat indefinitely 6: **until** False 7: **end procedure** 8: **procedure** DISTILL(overseer) **return** An AI trained using narrow, robust techniques to perform a task that the overseer already understands how to perform. 9: **end procedure** 10: **procedure** AMPLIFY(human, AI) _▷_ Interactive process in which human uses many calls to AI to improve on human’s native performance at the relevant tasks. 11: **end procedure** ing techniques. Additionally, despite the original authors likening IDA to the training process of AlphaZero (Silver et al., 2017) and having demonstrated it in toy environments (Christiano et al., 2018), its practicality hinges on ensuring that _H_ can delegate portions of complex tasks to A, analogous to a leader orchestrating a team to accomplish a project collectively. In practice, Gato (Reed et al., 2022) illustrates key aspects of IDA (Mukobi, 2022) that may pave the way to AGI. It consolidates the abilities of multiple expert AIs into a singular model, validating that IDA’s distillation can be achieved using contemporary deep learning. While not fully realized, Gato hints at amplification potential, harnessing its diverse skills to accelerate the learning of new tasks. However, Gato lacks safe amplification or distillation methods to maintain alignment properties. Crafting alignment-preserving IDA methods suited for models like Gato remains a crucial direction for AI safety research. In essence, while Gato signifies notable progress in actualizing IDA, further theoretical advancements are imperative to ensure that the IDA framework leads to safe AGI. **2.4.3** **Recursive Reward Modeling** As discussed in §2.2, reward modeling leverages the idea of using human feedback to train a reward model, which an agent then pursues. It allows us to disentangle the construction of the system’s objective from evaluating its behavior (Ibarz et al., 2018). In this manner, the reward model provides insights into the optimization direction of the AI system. Particularly noteworthy is the ability to finely align the system with human intentions and values, such as fine-tuning language models to adhere to human instructions (Bai et al., 2022a; Touvron et al., 2023). Also, reward modeling has proved valuable in advancing AI research (Zhao et al., 2023; Bukharin et al., 2023). Recursive Reward Modeling (RRM) (Leike et al., 2018) seeks to broaden the application of reward modeling to much more intricate tasks. The central insight of RRM is the recursive use of already trained agents _At_ −1 to provide feedback by performing reward learning on an amplified version of itself for the training of successive agents _At_ on more complex tasks. The _A_ 0 is trained via fundamental reward modeling (learned from pure human feedback). This approach is not only influenced by human feedback but also by the model’s own assessments of what constitutes a rewarding outcome. If the assumption that _evaluating outcomes is easier than producing behavior_ holds, then the iterative process of reward modeling can iteratively achieve higher capacity to oversee more powerful AI systems, paving the way for extending oversight into more complex domains. This process is detailed in Algorithm 2. For instance, we aim to train AI _A_ to devise a comprehensive urban plan. Designing a city entails numerous intricate elements, such as traffic planning, public amenities, and the distribution of residential and commercial zones. Evaluating a city’s entire design poses a significant challenge since many issues may only become apparent after extended real-world testing. To aid this process, we may need an agent _B_ specifically for traffic planning. However, traffic planning in itself is a multifaceted task. Consequently, we further need other agents to assess aspects such as road width, traffic flow, and the design of public transportation. For every sub-task, such as gauging road width, we can train an auxiliary agent to verify if safety standards are met, if various modes of transportation have been considered, and so on. In doing so, we establish an RRM process where each agent is trained with the help of agents assessing sub-tasks. This approach resembles the organizational structure of a large corporation (Leike et al., 2018). In the context of urban planning, the main planning team (the CEO) is responsible for the final design decisions. Their decisions are informed by recommendations from the traffic team (the department managers), who, in turn, base their recommendations on inputs from the road width team (the managers), and so forth. Each level of decision-making 29 2.4 Scalable Oversight: Path towards Superalignment **Algorithm 2** Recursive Reward Modeling 1: Initialize agent _A_ 0 using reward modeling based on user feedback. _▷_ Either preferences or numerical signals. 2: **for** _t_ = 1, 2, ... **do** 3: Use _At_ −1 to assist users in evaluating outcomes. 4: Train agent _At_ based on user-assisted evaluations. _▷_ Objective of _At_ is generally more complex than that of _At_ −1. 5: **end for** relies on feedback from the level below it, with each task optimized through reward modeling. The challenges faced by RRM can be described around the concepts of outer and inner alignment (Hubinger, 2020). Outer alignment revolves around the sufficiency of feedback mechanisms to guarantee that the learned reward model is accurate in the domain perceived by the action model as on distribution. This challenge is contingent on several factors, including the quality of human feedback, the difficulty of generalization, and the potential for agent deception. Conversely, inner alignment concentrates on how effectively a human can employ transparency tools to prevent deceptive or disastrous behaviors in both the reward model and the agent. This hinges on the effectiveness of the oversight mechanism and the capacity to verify that the reward model isn’t undergoing any optimization and that the agent remains myopic (Cotra, 2018). Potential approaches to mitigate these challenges (Leike et al., 2018) include online feedback to correct the reward model during training (Christiano et al., 2017), off-policy feedback to teach about unsafe states (Everitt et al., 2017), leveraging existing data like videos and text via unsupervised learning or annotating (Baker et al., 2022), hierarchical feedback on different levels (Bukharin et al., 2023) adversarial training to discover vulnerabilities (Madry et al., 2018), and uncertainty estimates for soliciting feedback (Hadfield-Menell et al., 2016; MacGlashan et al., 2017). The strength of RRM is its competitive training approach, which necessitates human feedback instead of demonstrations, potentially making feedback more reliable and simpler to obtain (Hubinger, 2020). In essence, the process of RRM can be likened to IDA (Christiano et al., 2018), where reward modeling takes the place of supervised or imitation learning. Thus, the challenges confronted by RRM closely mirror those encountered in IDA, particularly in preventing the accumulation of errors. Additionally, reward modeling itself does not necessarily distill a _narrow_ model (Cotra, 2018), which presents challenges in trading off the degree of alignment and performance. **2.4.4** **Debate** _Debate_ involves two agents presenting answers and statements to assist human judges in their decision-making (Irving et al., 2018), as delineated in Algorithm 3. This is a zero-sum debate game where agents try to identify each other’s shortcomings while striving to gain higher trust from human judges, and it can be a potential approach to constructing scalable oversight. For example, in the game of Go, human judges might not discern the advantage side of the single game board itself. However, by observing the game’s process and the eventual outcome, these judges can more easily deduce that. The premise of this method relies on a critical assumption: _arguing for truth is generally easier than for false-_ _hood_, granting an advantage to the truth-telling debater. However, this assumption does not hold universally. For instance, in a complex problem, humans might fail to comprehend the specialized concepts used in the debate. Additionally, the limited nature of the gradient descent may bring us to an undesirable cyclic pattern ( _i.e._, when optimizing for one property, such as honesty and highlighting flaws, models often overlook or diminish another) (Irving et al., 2018). It’s worth mentioning that with the advancement of LLMs’ capabilities, we can already see practical examples of debate (Du et al., 2023; Claude, 2023). Challenges may arise for debate in specific real-world scenarios (Irving et al., 2018). For example, certain questions may be too intricate for human comprehension or too voluminous to present in their entirety. Similarly, there are instances where an optimal answer to a question is exceedingly lengthy, envision a response that spans a hundred pages. To navigate these, agents might initially select a response and, as the debate progresses, reveal sections of either the question or the answer. Irving et al. (2018) conducts a toy experiment on this process. Meanwhile, we must acknowledge the limit of human time. In scenarios that necessitate interaction with the environment, such as directing a robot, each action might demand a distinct debate. It’s not always feasible for humans to judge every debate due to time constraints. In response to this challenge, we may need to design ML models to predict human feedback. In line with this observation, Khan et al. (2024) experimented with using smaller, non-expert models as judges in debates between two expert models, both of which had access to the underlying data and the ability to quote from it. The experiments demonstrated that these smaller non-expert models were able to achieve higher accuracy when relying on the expert model debates, though they still underperformed compared to human judges. Additionally, the expert models can be optimized 30 2.4 Scalable Oversight: Path towards Superalignment **Algorithm 3** Debate 1: Initialize set of questions _Q_ . 2: Initialize two competing agents. 3: Select a question _q_ ∈ _Q_ . _▷_ Question is shown to both agents. 4: Agents provide their answers _a_ 0 and _a_ 1. The agents generate comment answers in response to _q_ . 5: Initialize debate transcript _T_ as an empty list. 6: **for** turn in predefined number of debate turns **do** 7: Agent makes a debate statement _s_ . 8: Append _s_ to _T_ . _▷_ Agents take turns and statements are saved in the transcript. 9: **end for** 10: Judge observes ( _q,a_ 0 _,a_ 1 _,T_ ) and decides the winning agent. for persuasiveness, enabling the judges to attain even greater accuracy and more easily identify the truth. The authors emphasize that debate implementations must be grounded in verifiable evidence to prevent debaters from fabricating facts. Further work on using weaker models as judges in debates guided by stronger models was conducted by Kenton et al. (2024). Their experiments focused on tasks involving both information asymmetry and symmetry between debaters and judges and were extended to include multimodal inputs. The protocols they applied evaluated the baseline performance of judges without debate protocols, alongside debate and consultancy protocols. These experiments considered both assigned positions and cases where debaters or consultants could choose their positions. Experimental results showed that debate consistently outperforms consultancy. Weak judges struggle to fully leverage debate protocols, and consultancy can significantly reduce the accuracy of judges, particularly when the consultant advocates for an incorrect solution. Overall, the authors interpret their findings as only weakly promising for the debate framework. However, these experiments were conducted solely with models at inference time, and debate protocols may hold greater potential when integrated into training. This is particularly relevant given that the task of judging a debate can be seen as OOD for models primarily fine-tuned for question answering. Another consideration is the convergence of the debate mechanism (Irving et al., 2018). Du et al. (2023) showcases the inherent tendency of the debate framework to eventually converge toward singular responses, even if accuracy is not guaranteed. Meanwhile, if challenges arise in achieving convergence, we might have to rely on intuition to gauge the effectiveness of convergence. This implies the requirement of human evaluators’ intervention and demands a certain level of expertise from these human assessors, posing challenges that must be addressed. Furthermore, there are many discussions originating from diverse perspectives. Ngo (2021) considers _Debate_ as one type of iterated amplification but more specific to make safety ground in concrete research questions, and its adversarial framing makes it easier to spot problems. Michaelcohen (2020) expresses concerns regarding the adverse implications of incentivizing debaters to employ deceptive strategies aimed at swaying the judgment process. Armstrong (2019); Barnes (2020) expound upon the various issues that can permeate the debate process, including challenges such as the obfuscated arguments problem, ambiguous responses, and the propagation of misleading implications. While one may affirm the presence of a sufficiently low probability of any underlying flaws within the argument, advocating for trustworthiness, the opposing debater may assert the existence of a sufficiently high probability of identifying a flaw within the argument somewhere, thus advocating for a lack of trust. Beth Barnes (2020) introduces the concept of _cross-examination_ to incentivize debaters to provide more informative responses. In this process, debaters have the agency to select a prior claim for scrutiny and obtain a copy of the opposing debater’s response. The entire exchange is documented, and debaters can present relevant segments to the judge. The introduction of cross-examination is a robust deterrent against dishonest debaters exploiting a sweeping narrative, in contrast to their prior arguments, to mislead the judge. There exists a notable similarity between the debate (Irving et al., 2018), IDA (Christiano et al., 2018), and RRM (Leike et al., 2018). These approaches can be comprehended in the view of an underlying principle: _evaluation_ _can be simpler than task completion_ [28] . Therefore, harnessing the evaluative capabilities of AI systems can result in distributions of capacity that are more advantageous for humans. The challenges these methods face, especially in mitigating the accumulation of errors, are also analogous. **2.4.5** **Cooperative Inverse Reinforcement Learning** Almost all previous methods consider learning from feedback a process separate from inference and control and often implicitly consider feedback providers as entities existing outside of the environment – indeed, failure modes like manipulation (Shevlane et al., 2023) and reward tampering (Everitt et al., 2021) occur exactly when feedback 28Discussions about this can also be found in the literature about these methods. 31 2.4 Scalable Oversight: Path towards Superalignment mechanisms that are supposedly outside of the environment become part of it and therefore subject to the AI system’s influence. The framework of Cooperative Inverse Reinforcement Learning (CIRL), however, unifies control and learning from feedback and models human feedback providers as fellow agents in the same environment. It approaches the scalable oversight problem not by strengthening oversight but by trying to eliminate the incentives for AI systems to game oversight, putting humans giving feedback and the AI system in cooperative rather than adversarial positions (Shah et al., 2020). In the CIRL paradigm, the AI system collaborates with humans to achieve the human’s true goal rather than unilaterally optimizing for human preferences. **Motivation and General Idea of CIRL** Many modes of misalignment, including, for example, reward hacking (Victoria et al., 2020; Skalse et al., 2022), deception (Park et al., 2023b), and manipulation (Shevlane et al., 2023), are results of the AI system confidently optimizing for misspecified objectives (Pan et al., 2021). During training and deployment, the specified objective ( _e.g._, the reward function) plays the role of an unchallengeable truth for the AI system, and human feedback is only respected to the extent specified in the objective, which means that it could be tampered (Everitt et al., 2021) or manipulated (Shevlane et al., 2023). CIRL (Hadfield-Menell et al., 2016, 2017b; Shah et al., 2020) attempts to mitigate this problem by (1) having the AI system explicitly hold uncertainty regarding its reward function, and (2) having humans provide the only information about what the reward function truly is. This uncertainty gives the AI system a tendency to defer to humans and a drive to determine what the human truly wants. Concretely speaking, it models the entire task as a two-player cooperative game, where the _human player H_ and the _robot player R_ share a common reward function _r_ (·). Importantly, the reward function and reward signals aren’t visible to _R_ (and indeed aren’t explicitly calculated by the training mechanism) and are only inferred by _R_ from behaviors of _H_ via an IRL-like process (including by asking and interacting with _H_ ). This game has been called the _CIRL_ (Hadfield-Menell et al., 2016), the _assistance_ _game_ (Fickinger et al., 2020), and the _assistance POMDP_ (Shah et al., 2020). In short, the AI system has the human’s true objective _r_ (·) as its own goal (despite not knowing values of _r_ (·) with certainty) and constantly tries to figure _r_ out by observing and interacting with the human. This reduces incentives for, _e.g._, manipulation since manipulation of human behaviors only serves to pollute an information source and does not affect _r_ . **Formulation of CIRL** Hadfield-Menell et al. (2016) characterizes the settings of CIRL (which we denote by _M_ ) by building upon classical multi-agent MDPs, resulting in the definition below of _M_ . - _M_ = S _,_ {A [H] _,_ A [R] } _,T,γ,r,_ Θ _,P_ 0 In the equation above, _S_ and {A [H] _,_ A [R] } are the space of world states and actions respectively, _T_ : _S_ ×A [H] ×A [R] → ∆( _S_ ) is the transition function, and _γ_ is the discount rate. Up to here, the definition is identical to that of a standard multi-agent MDP. The remaining elements, however, introduce the key difference: the reward function is parameterized, and its parameters can be modeled by a distribution. Θ is the space of values for the parameters _θ_ ; _r_ : _S_ × A [H] × A [R] × Θ → R is the shared reward function, and _P_ 0 ∈ ∆( _S_ × Θ) is the joint distribution of the initial state and the reward function’s parameters. This parameterization approach allows _R_ to model explicitly and reason about its belief over the true reward function. Using techniques from Nayyar et al. (2013), any CIRL setting can be reduced to an equivalent single-agent POMDP, thus proving the existence of optimal policies that are relatively tractable (Hadfield-Menell et al., 2016). **Notable Directions in CIRL Research** Although some have emphasized the importance of _H_ teaching _R_ (Fisac et al., 2020) actively, works (Shah et al., 2020) have contested the emphasis on game equilibria and joint policies (including _H_ ’s pedagogic behaviors), and instead focuses on _R_ ’s optimal response to a policy of _H_ ’s, since the assumption that humans will always act on optimal joint policies is an unrealistic one. More specifically, Shah et al. (2020) considers the _policy-conditioned belief B_ : Π [R] → ∆ Π [H][�], which specifies _H_ ’s distribution over policy responses to any of _R_ ’s policies, and the aim is to find _R_ ’s optimal policy given _B_ . Here, _B_ is essentially a form of human modeling, and one challenge is to obtain a robustly accurate human model as _B_ (Hong et al., 2022). On another front, Hadfield-Menell et al. (2017b) and He and Dragan (2021) examine the manual specification of an imperfect reward function as a way for _H_ to convey information about the true reward function. This includes work on _R_ ’s side ( _i.e._, enabling _R_ to perform inference on the true reward function based on the imperfect specification) (Hadfield-Menell et al., 2017b) and also work on _H_ ’s side ( _i.e._, developing algorithmic tools to assist _H_ in making more robust specifications that better convey the true reward function) (He and Dragan, 2021). Aside from improvements to the game settings, the design of more scalable CIRL algorithms has also been recognized as a priority. There has also been work that extends CIRL and assistant games to multi-agent settings (Fickinger et al., 2020) where there are multiple humans that the robot needs to serve. This corresponds to the _multi/single delegation_ 32 settings in Critch and Krueger (2020), where the varying objectives of humans create a challenge and necessitate the use of social choice methods. **2.4.6** **Circuit Breaking** Instead of training a model to refuse harmful outputs, the circuit-breaking approach proposed by Zou et al. (2024a) directly controls the internal representations responsible for generating such outputs. A potential advantage of this method is that it aims to enhance safety without compromising performance. The core idea is to manage the model’s ability to generate harmful outputs, rather than eliminating the underlying vulnerabilities. This approach renders circuit-breaking attack-agnostic, as new adversarial attacks may emerge, but the internal representations associated with harmful outputs remain constant. The circuit-breaking method consists of two essential components: the dataset and the loss function. The dataset includes harmless samples, where internal representations should remain unchanged, while the circuit-breaking set comprises harmful samples, which require altered internal representations to prevent the generation of harmful content. By employing a mean squared error loss for retaining representations and cosine similarity for circuitbreaking representations, the model becomes unable to generate harmful content. Changing the loss function for circuit-breaking can also allow for steering the model’s generation in various directions, such as ending the output generation or refusing to answer. Additionally, this approach can be extended with a Harmfulness Probing mechanism to detect and respond to harmful generations. The authors successfully applied this method to Large Language Models, Multimodal models, and Language Agents to control function calls. **2.4.7** **Weak-to-Strong Generalization** Scalable Oversight can help humans provide supervision signals to AI systems that are smarter and more complex, ensuring that the behaviors of super-human-level AI systems align with human intent and values. However, what if we cannot obtain scalable supervision signals? An example is that for some tasks, evaluation is not necessarily simpler than generation, making it impossible to utilize task decomposition followed by AI assistance to achieve scalable oversight. Recently, a generalization phenomenon called _Weak-to-Strong Generalization_ is verified, the core idea of which is to use weak supervision signals from a weak model to train a strong model (Burns et al., 2023). Specifically, the weak model is trained on ground truth and then annotates new data with weak labels for training the strong model. The results across three settings ( _i.e._ NLP classification, chess puzzles and reward modeling) reflect that _weak-to-strong generalization_ is a robust phenomenon, yet there is room for further improvement, such as narrowing the gap between a strong model trained with weak labels and ground truth. _Weak-to-Strong Generalization_ provides a valuable analogy for the superalignment problem: how humans can supervise super AI systems as weak supervisors. The insight behind _weak-to-strong generalization_ is that the strong model can generalize beyond weak labels instead of merely imitating the behavior of weak models. In other words, the weak model elicits the strong model’s capability. However, verifying _weak-to-strong generalization_ is challenging if humans don’t know the ground truth. Nonetheless, _weak-to-strong generalization_ still offers a valuable perspective for solving the superalignment problem. The framework for _weak-to-strong generalization_ has been further expanding and integrating with scalable oversight. Empirical results show that weak models can evaluate the correctness of stronger models by assessing the debate between two expert models (Khan et al., 2024). Additionally, making expert debaters more persuasive improves non-experts’ ability to discern truth in debates, evidencing the effectiveness of aligning models with debate strategies without ground truth. Some frameworks employ a external amplifier to create an iterated distillation and amplification process, which presents a potential framework for integrating _weak-to-strong generalization_ techniques with IDA during the training process (Ji et al., 2024a). Moreover, Leike (2023a) proposes several methods to integrate scalable oversight with _weak-to-strong generalization_ techniques, _e.g._, recursively decomposing tasks into atomic ones (in line with scalable oversight principles), supervising these atomic tasks, and employing reward models trained with _weak-to-strong generalization techniques_ using human preference data. **3** **Learning under Distribution Shift** The construction of reliable AI systems is heavily dependent on their ability to adapt to diverse data distributions. Training data and training environments are often imperfect approximations of real deployment scenarios and may lack critical elements such as adversarial pressures (Poursaeed et al., 2021) ( _e.g._, Gaussian noise in the context of supervise learning-based systems (Gilmer et al., 2019) and shadow attack (Ma et al., 2012) in autonomousdriving systems), multi-agent interactions (Critch and Krueger, 2020; Dafoe et al., 2021), complicated tasks that 33 3.1 The Distribution Shift Challenge Figure 6: Framework of learning under distribution shift. The main challenges stemming from the distribution shift are goal misgeneralization and auto-induced distribution shift (§3.1). In our framework, we also introduce two kinds of methods to address distribution shift: algorithmic interventions (§3.2) that steer optimization during training, and data distribution interventions (§3.3) that expand the training distribution in a targeted manner by introducing real-world elements. human overseers cannot efficiently evaluate (Leike et al., 2018), [29] and reward mechanisms that can be gamed or manipulated (Krueger et al., 2020). This discrepancy between training distribution and testing distribution (or environments) is known as _distribution shift_ (Krueger et al., 2020; Thulasidasan et al., 2021). Therefore, AI systems that are aligned under their training distribution ( _i.e._, pursuing goals that are in line with human intent) may not uphold their alignment under deployment (or testing) distribution, potentially leading to serious misalignment issues post-deployment. This potential failure motivates research on the preservation of alignment properties ( _i.e._, adherence to human intentions and values) across data distributions. From an alignment perspective, we are more concerned about AI systems pursuing unaligned and harmful goals, as opposed to incompetence at pursuing goals. Thus, the emphasis on alignment properties means that we focus on the generalization of _objectives_ across distributions, as opposed to the generalization of _capabilities_ (Di Langosco et al., 2022; Ngo et al., 2024). We mainly discuss the preservation of alignment properties when learning under distribution shift in this section. We start the discussion by introducing the alignment challenges from distribution shift (§3.1). Subsequently, we delve into methods for addressing distribution shift, and discuss two approaches in particular: (1) algorithmic interventions (§3.2) that steer optimization during the training process, and (2) data distribution interventions (§3.3) that expand the training distribution by introducing specific elements into the training process, including adversarial training (Yoo and Qi, 2021; Bai et al., 2021; Ziegler et al., 2022) and cooperative training (Dafoe et al., 2021) (§3.3.2). Our framework for learning under distribution shift is shown in Figure 6. **3.1** **The Distribution Shift Challenge** Before introducing the specific techniques, we initially demonstrate why one of the primary challenges in alignment is learning under distribution shift, and more specifically, the preservation of _alignment properties_ ( _i.e._, adherence to human intentions and values) under distribution shift. We introduce two alignment challenges concerning the issue of distribution shift, namely goal misgeneralization (Di Langosco et al., 2022) and auto-induced distribution shift (ADS) (Krueger et al., 2020). 29This could contribute to the emergence of deceptive behaviors (Hubinger et al., 2019a). See the paragraph on _goal misgen-_ _eralization_ in §3.1 for details. 34 3.1 The Distribution Shift Challenge The training of AI systems optimizes for their adherence to the pursuit of the training reward/loss under the training input distribution. However, this adherence may not generalize to cases where the input distribution undergoes qualitative changes, _i.e._, distribution shift. These changes include, for example, adversarial pressures (Poursaeed et al., 2021), multi-agent interactions (Critch and Krueger, 2020), and complicated tasks that human overseers cannot efficiently evaluate (Di Langosco et al., 2022), and reward mechanisms that can be gamed or manipulated (Krueger et al., 2020). It’s worth distinguishing two different failure modes here: goal misgeneralization (Di Langosco et al., 2022), in which the original and shifted distributions are given, and auto-induced distribution shift (Krueger et al., 2020), where the AI system alters the data distribution with its own behaviors in pursuit of reward. **Goal Misgeneralization** This kind of challenge refers to the scenario where AI systems perform perfectly in the training distribution, but the capabilities learned in training distribution fail to generalize in OOD deployment, and AI may present the pursuit of goals that are not in accordance with human wishes (Di Langosco et al., 2022). Goal misgeneralization [30] is to be distinguished from other forms of misgeneralization ( _e.g._, capability misgeneralization) where the agent becomes incompetent in OOD settings; instead, agents with goal misgeneralization _competently_ pursue an _unwanted_ goal in OOD settings. A simplistic example is the case of _spurious correlations_ (or _shortcut features_ ) (Geirhos et al., 2019; Di Langosco et al., 2022). For example, in an image classification dataset, green grass is a highly predictive feature for the label _cow_ . However, it is essential to note that this feature needs to be more consistent and reliable across various data distributions (Murphy, 2023). Moreover, the causal confusion ( _i.e._, ignorant of the causal structure of the interaction between the advisor and the environment) in IL can result in goal misgeneralization (De Haan et al., 2019; Tien et al., 2022). One major danger from goal misgeneralization lies in the indistinguishability between “optimizing for what human really wants” and “optimizing for human thumbs-ups”; [31] the latter includes potentially deceiving or manipulating human evaluators (Shevlane et al., 2023) to receive their thumbs-ups. For example, Amodei et al. (2017) discovered that in a task where a robotic hand is supposed to grasp a small ball, the robotic hand fakes the action by using parallax in front of the lens to appear as if it has grasped the ball, without actually doing so. This behavior deceives the human annotator into thinking that the task has been completed. When an AI system is trained or finetuned with human feedback, it is impossible to distinguish the two goals since both perform perfectly in training, and it is unclear which one the AI system will learn. In fact, during training, the human evaluators might be deceived or manipulated, implying that the AI system may be more strongly incentivized to optimize for human thumbs-ups rather than what the human wants. Current examples of this phenomenon exist in recommender systems (Kalimeris et al., 2021; Adomavicius et al., 2022), LLMs (Perez et al., 2023), and RL systems (Amodei et al., 2017). Finally, one failure mode closely related to goal misgeneralization is the misalignment of _mesa-optimizers_ (Hubinger et al., 2019c), where the ML model with learned model weights performs optimization within itself during inference (“mesa-optimization”) (Hubinger et al., 2019c; Dai et al., 2023), and the objective of this optimization is not aligned with the model’s training objective. **Auto-Induced Distribution Shift (ADS)** While training AI systems, we often consider the strengths and weaknesses of the agents themselves only and overlook the impact that these agents have on the environment. Past research often assumed that data is independently and identically distributed (Besbes et al., 2022), ignoring the effect of algorithms on data distribution. However, Krueger et al. (2020) posited that, in reality, agents could influence the environment during the decision-making and execution process, thus altering the distribution of the data generated by the environment. They referred to this type of issue as ADS. A real-world example is in recommendation systems, where the content selected by the recommendation algorithms might change users’ preferences and behaviors, leading to a shift in user distribution. The distribution shift, in turn, further affects the output of the recommendation algorithms (Carroll et al., 2022). As AI systems increasingly impact the world, we also need to consider the potential further impacts on the data distribution of the entire society after agents are integrated into human society. **Superficial Alignment** In recent work, the technique of _Inverse Alignment_ in LLMs was introduced by Ji et al. (2024c). The study focused on the elasticity of LLMs, which describes their tendency to revert to a state resembling their original pretrained form after further finetuning post-alignment. This behavior, observed in multiple studies (Yang et al., 2023b; Zhou et al., 2024), suggests that alignment may not be a permanent change, as models can easily lose their aligned behavior when subjected to new fine-tuning tasks. 30More examples of goal misgeneralization exist (DeepMind, 2020). 31Here, _human thumbs-ups_ refer to high-reward feedback from human advisors or environment. However, AI systems may deliberately follow human preferences or deceive to get high rewards from humans, but actually don’t really learn intended goals ( _i.e._, what human really wants). 35 3.2 Algorithmic Interventions Figure 7: A tree diagram summarizing the key concepts and literature related to Algorithmic Interventions. The root node represents Algorithmic Interventions that aim to steer optimization during the training process. The main branches represent two main methods, namely cross-distribution aggregation (which aims to minimize risks on different distributions during training to find a predictor based on the invariant relationship instead of spurious features) and navigation via mode connectivity (which aims to fine-tune based on mode connectivity to enhance model generalization performance). Further sub-branches list vital techniques such as Distributionally Robust Optimization (DRO), Invariant Risk Minimization (IRM), Risk Extrapolation (REx), and Connectivity-based Finetuning (CBFT). The authors formally define _inverse alignment_ as follows: Given an initial LLM _pθ_ 0, after aligning it on a dataset _Da_ to produce the aligned model _pθ_ 1, an operation is performed using a much smaller dataset _Db_ (where | _Db_ | ≪| _Da_ |), resulting in an inverse-aligned model _pθ_ 0 [′] [. The goal is to ensure that] _[ ρ]_ [(] _[p][θ]_ 0 [′] _[,p][θ]_ [0][)][ ≤] _[ϵ]_ [ for some metric] _ρ_, which measures behavioral or distributional similarity between models. This process is referred to as _inverse_ _alignment_, and the return from _pθ_ 1 to a model similar to _pθ_ 0 is the essence of elasticity. Elasticity is further formalized as the property of LLM parameters returning to a state close to _pθ_ 0, given an algorithmically simple inverse transformation _g_ applied to _pθ_ 1, along with a dataset _Db_ . The dataset size constraint | _Db_ | ≪| _Da_ | ensures that a small amount of data suffices to reverse the effects of alignment, leading to _pθ_ 0 [′] [such that] _[ ρ]_ [(] _[p][θ]_ 0 [′] _[,p][θ]_ [0][)][ ≤] _[ϵ]_ [.] Theoretical findings show that the normalized compression rate for both the pretraining and fine-tuning datasets decreases upon additional fine-tuning, but the reduction is more pronounced for the fine-tuning dataset by a factor of Θ( _k_ ), where _k_ = [|] | _[D]_ _D_ [1] 2 [|] | [,] _[ D]_ [1][ is the pretraining dataset, and] _[ D]_ [2][ is the fine-tuning dataset. This suggests that models] are more likely to forget the distribution in the fine-tuning dataset while maintaining their pretraining distribution after exposure to new data. Experimental results further confirm the existence of elasticity in LLMs, showing that they tend to revert to the pretraining distribution with fewer training samples than required during the alignment phase. Larger models, and those with more extensive pretraining data, exhibited greater elasticity, highlighting the limitations of current alignment methods. The theoretical and experimental results indicate that alignment methods are often superficial, and more robust approaches are necessary to ensure the safety of AI systems, particularly in the face of _inverse alignment_ . Moreover, these findings underscore the risks associated with open-source models, as they could potentially be reverted to unsafe states with minimal effort, raising concerns about open-source policies in large AI companies. **3.2** **Algorithmic Interventions** When illustrating the algorithmic intervention methods, we first outline two classes of methods that steer optimization on various distributions during training to relieve distribution shift, namely, cross-distribution aggregation (§3.2.1) and navigation via mode connectivity (§3.2.2). In the first part, we cover methods ranging from the initial approach of _empirical risk minimization_ (ERM) (Vapnik, 1991) to _risk extrapolation_ (REx) (Krueger et al., 2021), a method conceived to mitigate issues arising from models’ dependence on spurious features. In the second part, we introduce _connectivity-based fine-tuning_, which guides the navigation of the loss landscape during training to encourage convergence upon non-spurious correlations, and which does so using insights from _mode connectivity_ (Lubana et al., 2023). **3.2.1** **Cross-Distribution Aggregation** One of the main reasons for distribution shift is spurious correlations in the model that are distinct from core objectives (Geirhos et al., 2019). By integrating learning information of different domains (or different distributions) into the optimization objective, we expect the model to learn truthful information and invariant relationships. In the following paragraphs, we first introduce ERM as the background and then introduce some methods to directly learn how to address distribution shift by integrating loss landscapes of different distributions in the training process. 36 3.2 Algorithmic Interventions **Empirical Risk Minimization (ERM)** Consider a scenario where a model has been developed to identify objects by their features effectively. The optimization target can be expressed as: R( _w_ ) = L� _y,f_ ( _x,_ _w_ ) - _d_ P( _x,y_ ) where L( _y,f_ ( _x,_ _w_ )) denotes the loss between data labels _y_ and model outputs _f_ ( _x,_ _w_ ), while P( _x,y_ ) signifies the target data distribution (Vapnik, 1991). Nevertheless, a bias often exists between the dataset and the real world, implying that the features learned from the dataset may not necessarily be the ones we intend for the model to acquire. ERM is a strategy employed in statistical methods to optimize this bias. It operates on the assumption that, given the inaccessibility of the real-world target data distribution, the empirical data within the dataset should, ideally, closely approximate this unknown target distribution (Vapnik, 1991; Zhang et al., 2018b). In this context, the objective function is optimized and is redefined as: E( _w_ ) = [1] _l_ _l_ - L� _yi,f_ ( _xi,_ _w_ ) _i_ =1 where _l_ can be different examples in one training distribution or different training distributions. Minimizing the objective function above allows the model to learn the invariant relationship in different distributions. Naive ERM makes the naive assumption that the data is sampled from the target data distribution. However, if a significant discrepancy exists between the source distribution (or training distribution) and the target distribution, severe generalization issues can still arise (Szegedy et al., 2013). **Distributionally Robust Optimization (DRO)** Numerous studies posit that the sensitivity to distribution shift often arises from reliance on _spurious correlations_ or _shortcut features_ unrelated to the core concept (Geirhos et al., 2019; Hendrycks and Dietterich, 2018). For instance, models may judge based on background features rather than employing the correct features within the image (Geirhos et al., 2019; Beery et al., 2018). Building upon the foundations laid in prior research (Ben-Tal et al., 2009; Peters et al., 2015; Krueger et al., 2021), OOD Generalization can be formulated as follows: _r_ D [OOD] ( _θ_ ) = max _e_ ∈D _[r][e]_ [ (] _[θ]_ [)] This optimization seeks to enhance worst-case performance across a perturbation set, denoted as D, by reducing the maximum value among the risk function set { _re_ | _e_ ∈D}. In _Distributionally Robustness Optimization (DRO)_ (Duchi et al., 2021), the perturbation set covers the mixture of different domains’ training distributions, and by minimizing the above objective function, we expect the model can find the invariant relationship between different training distributions. However, it should be noted that naively applying DRO to overparameterized neural networks may lead to suboptimal outcomes (Sagawa et al., 2020). Therefore, combining DRO with increased regularization techniques such as _l_ 2 penalty (Cortes et al., 2009) or early stopping (Prechelt, 2002) can substantially improve generalization performance. For more details on DRO, see _e.g._, Rahimian and Mehrotra (2019); Sagawa et al. (2020); Lin et al. (2022a) **Invariant Risk Minimization (IRM)** Arjovsky et al. (2019) introduces an innovative learning paradigm to estimate nonlinear, invariant, causal predictors across diverse training environments, thereby facilitating robust OOD generalization. IRM aims to train a predictive model with solid performance across various environments while demonstrating reduced susceptibility to relying on spurious features. IRM can be considered an extension of Invariant Causal Prediction (ICP) (Peters et al., 2015), which involves hypothesis testing to identify the direct causal features that lead to outcomes within each specific environment instead of indirect features. IRM further extends ICP to scenarios characterized by high-dimensional input data, where variables may lack clear causal significance. The fundamental idea underlying IRM is that when confronted with many functions capable of achieving low empirical loss, selecting a function that exhibits strong performance across all environments is more likely to get a predictor based on causal features rather than spurious ones (Murphy, 2023). **Risk Extrapolation (REx)** The basic form of REx involves robust optimization over a perturbation set of extrapolated domains (MM-REx), with an additional penalty imposed on the variance of training risks (V-REx) (Krueger et al., 2021). By reducing training risks and increasing the similarity of training risks, REx forces the model to learn the invariant relationship in different domain distributions. Amplifying the distributional variations between training domains can diminish risk changes, thereby enforcing the equality of risks. Taking CMNIST (Arjovsky et al., 2019) as an example, even though establishing a connection 37 3.2 Algorithmic Interventions between color and labels is more straightforward than connecting logits and labels, increasing the diversity in color can disrupt this _spurious correlations_ (or shortcut features) and aid the model in learning the genuine invariant relationship between logits and labels. Following previous research (Vapnik, 1991; Peters et al., 2017; Krueger et al., 2021), REx can be formulated as follows: Firstly, the Risk Function can be defined as follows: - _re_ ( _θ_ ) � E( _x,y_ )∼ _Pe_ ( _X,Y_ ) _L_ _fθ_ ( _x_ ) _,y_ where _L_ (·) represents a fixed loss function, and distinct training domains or environments can be formulated as the _Pe_ ( _X,Y_ ) distribution. Next, the MM-REx term can be modeled as: _r_ MM−REx( _θ_ ) = (1 − _mλ_ min)max _re_ ( _θ_ ) + _λ_ min _e_ _n_ _re_ ( _θ_ ) _e_ =1 where _n_ represents the number of distinct distributions or domains, and _λ_ min governs the extent of risk extrapolation. Moving on to the V-REx term, it can be modeled as: �� - [�] _r_ V−REx( _θ_ ) = _α_ Var _r_ 1( _θ_ ) _,...,rn_ ( _θ_ ) + _n_ _re_ ( _θ_ ) _e_ =1 where _α_ ≥ 0 controls the trade-off between risk reduction and enforcing risk equality. In the MM-REx term, the _λ_ min can set nearly −∞; therefore, the loss of specific domains may be high, meaning that the model may learn the spurious correlations. Minimizing the MM-REx and V-REx can reduce training risks and increase the similarity of training risks, encouraging the model to learn invariant relationships. Furthermore, REx has shown significant promise in experimental settings (Krueger et al., 2021), particularly in causal identification, making it a compelling approach for achieving robust generalization. **Tackle Distribution Shift in LLMs** In the context of LLMs, prior research has shown that RL often exploits shortcuts to achieve high rewards, overlooking challenging samples (Deng et al., 2023b). This evasion of longtail training samples prevents LLMs from effectively handling distribution shifts in general scenarios, which falls short of expectations for these models: as universal AI assistants, they should maintain consistent performance across various domains. Recently, many works have attempted to implement _cross-distribution aggregation_ in LLMs to address this issue. Zheng et al. (2024) employ RL to learn uniform strategies across diverse data groups or domains, automatically categorizing data and deliberately maximizing performance variance. This strategy increases the learning capacity for challenging data and avoids over-optimization of simpler data. Yao et al. (2024) concentrate on exploiting inter-domain connections. Specifically, they acquire training-domain-specific functions during the training phase and adjust their weights based on domain relations in the testing phase, achieving robust OOD generalization. **3.2.2** **Navigation via Mode Connectivity** Following the above discussion about cross-distribution aggregation, in this section, we introduce mode connectivity as the prerequisite content. Then, we primarily discuss the Connectivity-Based Fine-Tuning (CBFT) (Lubana et al., 2023) method, illustrating how mode connectivity navigates the model to predict based on invariant relationships instead of spurious correlations by changing few parameters. **Mode Connectivity** Mode connectivity refers to the phenomenon where one can identify a straightforward path within the loss function space that connects two or more distinct local minima or patterns (Garipov et al., 2018; Draxler et al., 2018). In line with prior research (Benton et al., 2021; Pittorino et al., 2022; Lubana et al., 2023), a formal definition can be defined as follows: The model’s loss on a dataset D is represented as L( _f_ (D; _θ_ )), where _θ_ denotes the optimal parameters of the model, and _f_ (D; _θ_ ) signifies the model trained on dataset D. We define _θ_ as a minimizer of the loss on this dataset if L( _f_ (D; _θ_ )) _< ϵ_, where _ϵ_ is a small scalar value. Minimizers _θ_ 1 and _θ_ 2, achieved through training on dataset D, are considered to be mode-connected if there exists a continuous path _γ_ from _θ_ 1 to _θ_ 2 such that, as _θ_ 0 varies along this path _γ_, the following condition is consistently upheld: - - - - - L _f_ (D; _θ_ 0) ≤ _t_ - L _f_ (D; _θ_ 1) + (1 − _t_ ) · L _f_ (D; _θ_ 2) _,_ ∀ _t_ ∈ [0 _,_ 1] _._ In essence, mode connectivity entails consistently finding a connecting pathway among minimizers in the parameter space, traversing regions of low loss without delving into regions of highly high loss. This implies that even when making minor adjustments to the model’s parameters within the parameter space, the model’s performance 38 3.3 Data Distribution Interventions can remain relatively stable, mitigating significant performance degradation (Garipov et al., 2018). This concept lays the foundation for designing more effective optimization algorithms, enabling models to share knowledge and experiences across different tasks, enhancing both model performance and generalization capabilities. Furthermore, we can define two models as mechanistically similar if they employ the same attributes of inputs for making predictions. Some research has demonstrated that the absence of linear connectivity implies mechanistic dissimilarity, suggesting that simple fine-tuning may not suffice to eliminate spurious attributes learned during the pre-training phase (Lubana et al., 2023; Juneja et al., 2022). However, it is promising to address non-linearly connected regions through fine-tuning, thereby effectively modifying the model’s mechanisms to resolve the issue of OOD misgeneralization. **Connectivity-Based Fine-tuning (CBFT)** As discussed above, recent research has suggested that the absence of linear connectivity between two models implies a fundamental mechanistic dissimilarity. Lubana et al. (2023) finds that models tend to develop similar inference mechanisms when trained on similar data. This could be a significant reason for the emergence of bias in models, such as relying on the background information of images for classification rather than the objects depicted in the images. If this model mechanism is not adjusted during the finetuning process, the model may rely on these false attributes. To overcome this problem, they propose a valid strategy for altering a model’s mechanism, which aims to minimize the following loss: - LCBFT = LCE _f_ (DNC; _θ_ ) _,y_ + LB + _K_ [1] [L][I] where the original training dataset is denoted as D, and we assume that we can obtain a minimal dataset without spurious attribute _C_, denoted as DNC. Besides LCE that denotes the cross-entropy loss between model’s prediction _f_ (DNC; _θ_ ) and the ground truth label _y_, CBFT has two primary objectives: (1) The first objective entails modifying a model’s underlying mechanism by repositioning it within the loss landscape, breaking any linear connection with the current minimizer. This is accomplished by maximizing LB, referred to as the _barrier loss_ . (2) The second objective involves mitigating reliance on spurious attributes in the original training dataset. This is achieved by optimizing L _I_, enabling the discovery of invariant relationships without the need for _C_ . CBFT holds promise for shifting the mechanism from predicting objectives by spurious features to true features, just changing partial parameters of models. **3.3** **Data Distribution Interventions** Besides algorithmic optimization, methods that expand the distribution of training data to include real-world elements can also reduce the discrepancy between training and deployment distributions. In this section, we specifically focus on the introduction of adversarial pressures and multi-agent dynamics. **3.3.1** **Adversarial Training** AI systems can suffer from a lack of adversarial robustness, meaning that certain inputs designed to make them fail cause the models to perform poorly (Zheng et al., 2016), which has been shown in images (Huang et al., 2017) and texts (Zou et al., 2023b; Shah et al., 2023), as well as changes to semantic features in images (Geirhos et al., 2019; Bhattad et al., 2019; Shamsabadi et al., 2020; Casper et al., 2022) and texts (Jia and Liang, 2017), and even examples generated entirely from scratch (Song et al., 2018b; Ren et al., 2020; Ziegler et al., 2022; Chen et al., 2024b). These failure modes are covered in the _red teaming_ section (§4.1.3). It’s worth noting that in addition to the robustness of AI model policies, the robustness of reward models that govern the training of advanced AI systems is also of importance, as the gradient descent optimization process could be seen as an adversary that may exploit loopholes in the reward model, a phenomenon named _reward model overoptimization_ that has been experimentally demonstrated (Gao et al., 2023). We consider adversarial robustness a case of distribution shift failure caused partly by a mismatch between AI systems’ training distribution (where the training inputs are not adversarially constructed) and testing distribution (where the example can be adversarially constructed). The method of _adversarial training_ (Yoo and Qi, 2021; Bai et al., 2021; Ziegler et al., 2022) mitigates this problem by introducing adversarial examples into training input through a variety of ways (Bai et al., 2021), thus expanding the training distribution and closing the distribution discrepancy. Adversarial training, which is similar to adversarial attacks, first started in the settings of image classification (Engstrom et al., 2019a), but later expanded to a wide range of settings. In addition to vision models, adversarial training algorithms have been proposed for language models (Wang et al., 2019a; Liu et al., 2020; Ziegler et al., 2022), vision-language models (Gan et al., 2020; Berg et al., 2022), _etc._ In terms of the model type, adversarial training has been applied to classification models (Bai et al., 2021), generative models (Ziegler et al., 2022), and RL agents (Pinto et al., 2017; Tan et al., 2020). There are two major types of adversarial training: _perturbation-based_ and _unrestricted_ . 39 3.3 Data Distribution Interventions Figure 8: A tree diagram summarizing the key concepts and literature related to Data Distribution Interventions. The root node represents Data Distribution Interventions that try to combine multiple distributions during training, for example, adversarial examples and multi-agent interaction. The main branches represent promising methods, namely, Adversarial Training that incorporates adversarial pressures and Cooperative Training that incorporates multi-agent dynamics. Further sub-branches list key techniques such as perturbation-based and unrestricted adversarial training, and cooperative methods also include environment-building, socially realistic settings, zero-shot coordination, and other Multi-Agent Reinforcement Learning (MARL)-based techniques. - **Perturbation-based Adversarial Training** . Mirroring _perturbation-based adversarial attack_ (see §4.1.3), perturbation-based adversarial training introduces adversarially perturbated examples ( _i.e._, small changes to a normal data input which are designed to reduce model performance) into training (Goodfellow et al., 2014). Techniques in this vein (Bai et al., 2021) include the baseline approach of adding a regularization term into the loss function to assess model performance on a gradient-based perturbated input (Goodfellow et al., 2014), unsupervised (Carmon et al., 2019) or self-supervised (Hendrycks et al., 2019) approaches, and various supplemental techniques such as the introduction of curriculum learning which gradually intensifies adversarial pressure during training. - **Unrestricted Adversarial Training** . Mirroring _unrestricted adversarial attack_ (see §4.1.3), unrestricted adversarial training generalizes perturbation-based adversarial training to include _any_ adversarial example that can fool the model, not necessarily ones obtained by adding a small amount of noise to another example. This includes _generative adversarial training_, which uses generative models to produce arbitrary adversarial inputs from scratch (Poursaeed et al., 2021), and the addition of syntactically or semantically modified adversarial examples to training input (Ziegler et al., 2022; Mao et al., 2022) which surprisingly eliminates the negative effects on the model’s non-adversarial performance. Most works on unrestricted adversarial attacks also apply to unrestricted adversarial training (see §4.1.3 for an overview) and form an important part of the unrestricted adversarial training methodology. **3.3.2** **Cooperative Training** _Cooperative AI_ (Dafoe et al., 2020, 2021) aims to address uncooperative and collectively harmful behaviors from AI systems (see §1.1.2). The lack of cooperative capabilities in AI systems can be seen as a form of failure under distribution shift – systems are trained in single-agent settings that are qualitatively different from the real world, which could be massively multi-agent. This difference is indeed a difference in data distribution since the presence of other agents in the environment qualitatively alters the environmental state transition dynamics, leading to changes in the joint distribution of observations and rewards. We approach the problem by expanding 40 3.3 Data Distribution Interventions our training distribution to include multi-agent interactions via _cooperative training_ . We introduce the branch of cooperative AI (what we call _cooperative training_ ) that focuses on specific forms of Multi-Agent Reinforcement Learning (MARL) training and complements formal game theory approaches in §4.3.1. The MARL branch of cooperative training tends to emphasize the AI system’s _capabilities_ for coordination ( _e.g._, coordination of a robot football team (Ma et al., 2022)), as opposed to _incentives_ of cooperation ( _e.g._, mitigating failure modes like the prisoner’s dilemma (Phelps and Russell, 2023)) which are the focus of the game theory branch. Here, we only cover the MARL branch due to its relevance to expanding training data distribution. The field of MARL had traditionally been divided into the three branches of _fully cooperative_ (where all agents share the same reward function), _fully competitive_ (where the underlying rewards constitute a zero-sum game), and _mixed-motive_ settings (where the reward incentives are neither fully cooperative nor fully competitive, corresponding to general-sum games) (Gronauer and Diepold, 2022). Among them, fully cooperative and mixed-motive settings are the most relevant for cooperative AI, and the latter has been especially emphasized due to its relative neglectedness (Dafoe et al., 2020). We also cover other research fronts, including zero-shot coordination (Hu et al., 2020; Treutlein et al., 2021), environment-building (Leibo et al., 2021), and socially realistic settings (Du, 2023). - **Fully Cooperative MARL** . Fully cooperative settings of MARL are characterized by a shared reward function for all agents (Gronauer and Diepold, 2022). This unity allows us to completely disregard issues of cooperation _incentives_ (since all incentives are perfectly aligned) and instead focus on effectively achieving the shared goal via coordination. Commonly adopted approaches (Oroojlooy and Hajinezhad, 2023) lie on a spectrum of centrality – from the baseline solution of purely independent training (Tan, 1993) to the approach of supplementing independent training with decentralized communications (Foerster et al., 2016), and then to _value_ _factorization_ which decomposes a global reward and determine each individual agent’s contribution (Guestrin et al., 2001; Sunehag et al., 2018). - **Mixed-Motive MARL** . Mixed-motive settings of MARL are characterized by a mixture of cooperative and competitive incentives – rewards for agents are not identical but aren’t zero-sum either (Gronauer and Diepold, 2022). This includes game environments where teams play against each other (Jaderberg et al., 2019) and more nuanced settings such as negotiation (Cruz et al., 2019; FAIR et al., 2022). Examples of techniques for mixedmotive MARL, again ordered from decentralized to centralized, include using IRL-like methods to learn from human interactions (Song et al., 2018a), making communications strategic and selective (Singh et al., 2018) and adapting actor-critic methods by granting the critic access to global information (Lowe et al., 2017). - **Zero-shot Coordination** . Zero-shot coordination is the goal of making AI systems able to coordinate effecively with other agents (including human agents) without requiring being trained together or otherwise being designed specifically to coordinate with those agents (Hu et al., 2020; Treutlein et al., 2021) – human beings who are complete strangers can still cooperate effectively, and we hope that AI systems can do the same. Early works were published under the name _ad hoc coordination_, covering evaluation (Stone et al., 2010), gametheoretic and statistical approaches (Albrecht and Ramamoorthy, 2013), and human modeling (Krafft et al., 2016). Recent advances include _other-play_ (Hu et al., 2020) which randomizes certain aspects of training partners’ policies to achieve robustness, [32] the introduction of multi-level recursive reasoning (Cui et al., 2021), and _off-belief learning_ (Hu et al., 2021) which eliminates arbitrary conventions in self-play by interpreting partners’ past actions as taken by a non-collusive policy. - **Environment-building** . Game environments have been popular settings for cooperative training, including, for example, Hanabi (Muglich et al., 2022), Diplomacy (Cruz et al., 2019; FAIR et al., 2022), and football (Ma et al., 2022). On the more simplistic end, game theory models, especially those based on classical multi-agent dilemmas, have also been a popular choice of environment (Wang and Beliaev, 2021; Christoffersen et al., 2023). Also, Melting Pot (Leibo et al., 2021), a framework and suite of multi-agent environments, has been designed specifically for cooperative AI research. There has also been research on _unsupervised environment_ _design_, which aims for a partial automation of the environment-building process (Dennis et al., 2020; Jiang et al., 2021). - **Socially Realistic Settings** . It has been proposed that cooperative AI research should focus more on socially realistic environments (Du, 2023), which tend to be massively multi-agent (including both AI agents and human agents) and are highly diverse in both the composition of agents and modes of interactions. Implications of this vision (Critch and Krueger, 2020) include, but aren’t limited to, building more realistic and open-ended environments (Klügl et al., 2005; Lehman et al., 2008; Wang et al., 2019b; Suo et al., 2021), scaling up MARL (Sun et al., 2020; Du, 2023), and incorporating new means of control such as social institutions and norms (Singh, 2014). 32This is in a similar spirit to _domain randomization_ (Tobin et al., 2017). 41 |Col1|Assurance|Col3| |---|---|---| |||| **Safety Evaluations** **Techniques** **Targets** Figure 9: Our organization of research directions, techniques, and applications in assurance. We divide this section into _three_ parts: Safety Evaluations – evaluation of AI systems’ safety, which refers to the mitigation of accidents and harmful events caused by the AI system; Interpretability – making AI systems as well as its decision process more understandable to human beings; Human Value Verification – verifying whether AI systems can adhere to social and moral norms. The figure also displays the intricate logic of these sections. **4** **Assurance** Assurance refers to the measurement and refinement of AI systems’ practical alignment after AI systems are actually trained or deployed (Batarseh et al., 2021). In this section, we categorize assurance into three parts based on a certain logic: Safety Evaluations – Evaluating AI systems on minimizing accidents during task execution as a basic need of assurance, Interpretability – Ensuring that humans can understand the decision-making process of AI systems and therefore assuring the safety and interoperability beyond evaluation, Human Value Verification – Verifying whether AI systems can align with human values, ethics, and social norms and satisfying the high-level need of AI systems’ integration to the human society, as is described in the Figure 9. In addition to methods that aim to _determine_ if AI systems are safe and aligned, there are also assurance methods that actively _intervene_ in the AI system or its deployment process to ensure such properties. **Machine Unlearning** Datasets for model pretraining contain various types of undesirable and potentially dangerous content, including but not limited to information about bioweapons and cyberattack (Hendrycks et al., 2021b). The field of _machine unlearning_ has aimed to remove such knowledge after a model is trained (Bourtoule et al., 2021). Compared to direct filtering of the training dataset, this approach faces more technical challenges, but it retains more flexibility in deployment and also allows categorical removal of a given piece of information (Eldan and Russinovich, 2023). Dataset filtering and unlearning ought to be seens as complementary approaches that work best together. **Controlling Unaligned Systems** While complete alignment may be difficult, it is still possible to safely utilize unaligned models if their extent of misalignment is limited and if we have access to supervisor AI systems. Algorithmic procedures have been developed to minimize probabilities of failure when given trusted and untrusted systems with differing capabilities (Greenblatt et al., 2023). In general, alignment-focused _process engineering_ of deployment procedures could be a valuable direction to explore. In addition, a class of methods, termed _provable safety_, aim to combine evaluation (§4.1), interpretability (§4.2), and other assurance methods under a unified framework that quantifies risks of AI safety violation. **Provable Safety** Provable safety aims to provide formally-grounded probabilistic guarantees on the safety of AI system, using input from evaluation tools, interpretability tools, and other assurance techniques; furthermore, it hopes to build development-deployment pipelines that satisfy such probabilistic guarantees (Tegmark and Omohundro, 2023; Dalrymple et al., 2024). Research around provable AI safety is still at an early stage, and significant uncertainties remain about its specifics. Dalrymple (2024) have made the case that three key subproblems currently exist within provable AI safety: _scaffolding_, _i.e._, formalizing the definition of real-world AI safety, by providing tools for domain experts to build safety specifications; _machine learning_, _i.e._, using ML methods to find control policies that satisfy the safety specifications; and _applications_, _i.e._, demonstrating the practical superiority of AI systems with safety gurantees other traditional ones. We then go on two review the three categories of alignment assurance efforts. **4.1** **Safety Evaluations** Safety refers to mitigating accidents caused by design flaws in AI systems and preventing harmful events that deviate from the intended design purpose of the AI system (Amodei et al., 2016). In fact, safety stands as a shared 42 4.1 Safety Evaluations Figure 10: A Tree diagram summarizing the key concepts, logic, and literature related to Safety Evaluation. The root of the tree represents Safety Evaluation, which aims to _measure the accidents caused by design flaws in AI_ _systems and harmful events that deviate from the intended design purpose of the AI system_ . The main branches represent the main structure of safety evaluation, including Datasets and Benchmarks, Evaluation Targets, and Red Teaming techniques. Further sub-branches list key works exploring each of these branches. This diagram provides an overview of research directions and specific techniques for measuring AI systems’ safety alignment degree. requirement across all engineering domains (Verma et al., 2010). Moreover, it holds particular importance in constructing AI systems, because of the characteristics of AI systems (Steinhardt, 2015). We categorize the safety of AI systems into the following categories: _Social Concerns_ refer to explicit and comparatively identifiable characteristics of safe AI systems, including aspects such as toxicity (Stahl and Leach, 2023), and _Intentional Behaviors_ share the characterization of relatively complicated investigation and substantial potential harm, represented by power-seeking, deception, and other frontier AI risks (Shevlane et al., 2023). Following the logic above, we start with the techniques to form datasets and benchmarks of safety evaluation in §4.1.1 and further explore the evaluation targets and their characteristics in §4.1.2. At the end of this section, we include the red-teaming technique §4.1.3, which assesses the AI system’s robustness beyond evaluation. **4.1.1** **Datasets and Benchmarks** In the discussions on safety evaluation, it is crucial to prioritize datasets and benchmarks as the cornerstone elements, so we first introduce the basic techniques to build datasets and benchmarks and then move on to newer interactive methods. **Dataset** Among all the assurance techniques, the dataset method could be considered the most elementary and straightforward one (Celikyilmaz et al., 2020). This method assesses the response of AI systems by presenting them with predefined contexts and tasks (Paullada et al., 2021), balancing the cost, quality, and quantity of data. Research on the dataset method encompasses data sources, annotation approaches, and evaluation metrics. Given that evaluation metrics can vary based on its subject (Sai et al., 2022), this section primarily emphasizes dataset sources and annotation methods. - **Expert design** . In the early stage of a domain, expert design is widely used in building datasets, where experts create samples based on actual needs to ensure the dataset covers a wide range of potentially dangerous situations to form datasets (Roh et al., 2019). For instance, initial-stage datasets, _e.g._, WEAT (Bolukbasi et al., 2016) and BBQ (Parrish et al., 2022) for bias detection used expert design to harvest a wide coverage and high 43 4.1 Safety Evaluations accuracy while sharing the limitations in terms of cost and breadth, leading to the later development of more efficient methods. - **Internet collection** . Previous expert design methods have the flaw of rather high cost and lower efficiency, and internet collection can obtain datasets that contain actual user-generated textual content on a rather large scale (therefore convenient for both training and testing), reflecting real-world text generation scenarios (Yuen et al., 2011), but the raw data collected also needs careful selection and annotation (Roh et al., 2019). Wellknown instances of these datasets include OLID (Zampieri et al., 2019) and SOLID (Rosenthal et al., 2021) gathering original Twitter texts for toxicity assessment, WinoBias (Zhao et al., 2018) and CrowS-Pairs (Nangia et al., 2020) gather content potentially containing bias from the internet for further annotation. However, it’s important to acknowledge that, as is also mentioned in Papernot et al. (2016), internet-collected datasets naturally carry risks such as privacy and safety concerns, so additional processing is necessary. - **AI Generation** . The concept of autonomously generating datasets was explored relatively early, even before the emergence of elementary forms of LLMs (Weston et al., 2015). However, during this early stage, AIgenerated datasets were limited by the capabilities of AI systems, so their quality was not as good as internetcollected and manually annotated datasets. It wasn’t until LLMs reached relatively high levels of proficiency in logical reasoning context understanding and approached or surpassed human-level performance (OpenAI, 2023a) that LMs gained the ability to mimic the structure and logic of existing datasets to compose new ones. As is shown in papers such as Zhang et al. (2022) and Perez et al. (2023), AI systems have made progress in generating datasets for evaluation purposes, surpassing the quality of some classical datasets. However, according to these papers, this approach still faces limitations rooted in the capabilities of large models themselves, including issues like instruction misunderstanding and example diversity, which require further refinement. **Interactive Methods** Due to the static nature of datasets, they possess relatively fixed evaluation content and can be vulnerable to targeted training (Holtzman et al., 2019). Additionally, the evaluation content may not fully reflect the strengths and weaknesses of corresponding capabilities (Engstrom et al., 2020). As the demands for language model evaluation continue to escalate, new interactive assurance methods have emerged, which can be categorized into two groups: Agent as Supervisor and Environment Interaction. - **Agent as Supervisor** . It is an assurance method that involves using an agent to assess the outputs of AI models. This evaluation approach is characterized by its dynamism and flexibility. Typically, there is a predefined framework for interaction between the agent and the AI system under evaluation (Cabrera et al., 2023). In this method, the agent can be a human participant engaged in experiments through an online system (Stiennon et al., 2020), a more advanced language model evaluating relatively less capable language models through multi-turn interactions (Lin and Chen, 2023), or in the context of _Scalable Oversight_, a less powerful but more trustworthy model (Greenblatt et al., 2023). This evaluation form offers advantages such as automation and lower cost compared to human agents. - **Environment Interaction** . It aims to create a relatively realistic environment using elements such as humans and other LLMs to assess the alignment quality of AI models through multiple rounds of interaction (Liu et al., 2024b). One method is using peer discussions, where multiple LLMs engage in dialogue, to enhance evaluations of AI systems, particularly when their capabilities are relatively close to each other. Moreover, by building a world model (Li et al., 2022b), the generalization and exploration abilities of AI systems can be comprehensively evaluated. **4.1.2** **Evaluation Targets** To achieve the goal of safety alignment, the assurance of AI systems can be divided into different small targets (Shevlane et al., 2023). The subsequent section gives an introduction to these subjects and, furthermore, discusses some of the domain-specific analyses of assurance methods within these realms, while the table 3 will show examples of alignment assurance works in these domains. **Toxicity** It refers to content in the output of AI systems that is unhelpful or harmful to humans (Sheth et al., 2022). Before the advent of advanced language models, early toxicity evaluation primarily focused on detecting toxic language and identifying harmful statements in an internet context, like the WCC (Wulczyn et al., 2017), which collected and manually labeled comments from Wikipedia discussion pages. With the emergence of pretrained language models, assurance against toxicity adopted a prompt-generation paradigm to assess the risk of language models generating toxic content in response to specific prompts (Gehman et al., 2020; Ganguli et al., 2022; OpenAI, 2023a). However, in crowdsourced environments, annotation scores may vary by person, so relative labeling, where crowdsource workers select from two different answers during a chat, is needed to enhance 44 4.1 Safety Evaluations Table 3: A Chart of Safety Evaluation Examples: Specific dataset works are listed in this chart, along with their detailed information: _evaluation targets_, _first release time_, _most recent update time_ (we list them separately because some datasets are consistently being updated), _information quantity_ (the sum of the information form unit), _institu-_ _tion_, _information form_, _baseline model_ and _information source_ . Moreover, to contain more information, we made some abbreviations in the chart: We shortened the release time and recent updates by concatenating the last two digits of the year and the month and only taking the institution of the paper’s first author, and we use combinations of uppercase letters to replace long words in information form: SP for Sentence Pairs, SL for Sentence-Label, ST for sentence template, PP for pronoun pairs, and SS for single selections. **Release** **Recent** **Info** **Information** **Baseline** **Infomation** **Dataset** **Institution** **Time** **Update** **Quantity** **Form** **Model** **Source** Bias Toxicity Aequitas [630] 18/05 23/04 - U.Chicago Python - Self Build WinoS [620] 18/10 19/01 0.72K JHU ST Rule&Neural Self Build EEC [382] 18/05 - 8K NRC Canada SP SVM Selection GAP [759] 18/05 - 8.9K Google PP Transformer Wikipedia OLID [802] 19/05 - 14K U.Wolver. SL SVM&LSTM Twitter CrowS-Pairs [511] 20/03 21/10 1.5K NYU SP BERT MTurk StereoSet [506] 20/04 22/04 17K MIT SS BERT&GPT-2 MTurk BBQ [561] 21/05 22/07 58.5K NYU SS Multiple LLMs MTurk LM-Bias [434] 21/07 22/01 16K CMU QA Pair GPT-2 Corpus Select VQA-CE [183] 21/03 21/10 63K Sorbonne Multimodal - Self-Build AuAI [409] 23/01 - - Sorbonne Framework - Self Build WCC [778] 16/01 - 63M Wikimedia SL Human Wikipedia RTP [269] 19/10 21/04 100K UW Prompt GPT-2 Refinement SOLID [614] 20/05 - 9M IBM SL BERT Twitter Toxigen [302] 20/05 23/06 274K MIT SL GPT-3 GPT Gen. HH-RLHF [54] 22/04 22/09 162K Anthropic SP Claude Corpus Refine BeaverTails [352] 23/06 23/07 30K PKU QA Pair Multiple LLMs Corpus Refine Power MACHIAVELLI [553] 23/04 23/06 134 UCB Games GPT-4&RL Selection Seeking BeaverTails [352] 23/06 23/07 30K PKU QA Pair Multiple LLMs Corpus Refine Situation SA Framework [633] 20/07 - - MIT Framework - Self Build Awareness EWR [430] - - 10 Havard Game Othello GPT Self Build PARENT [199] 19/06 - - CMU Metric - Self Build Hallucination PARENT-T [758] 20/05 - - NYU Metric - Self Build ChatGPT-Eval [61] 23/02 23/03 - HKUST Multimodal ChatGPT Integration POPE [433] 23/05 23/08 2K RUC Multimodal Multiple LVLMs Dataset Refine crowdsource quality (Bai et al., 2022a). Furthermore, subsequent datasets (Ganguli et al., 2022; Ji et al., 2024b) employ a red teaming design pattern that induces toxic responses through adversarial inputs, further strengthening the assurance of model robustness. The SafeSora dataset (Dai et al., 2024a) is the first text-to-video preference dataset designed to align large vision models (LVMs) with human values, focusing on helpfulness and harmlessness. **Power-seeking** It is a kind of risk that AI systems may seek power over humans once they possess certain levels of intelligence (Turner et al., 2021). In Carlsmith (2022), the authors point out that AI systems already have the conditions for power-seeking, including advanced capabilities, agentic planning, and strategic awareness. However, the assurance against power-seeking is still in its early stages. One representative work in this area is the Machiavelli (Pan et al., 2023a), which constructs a benchmark consisting of decision-making games to assess whether AI systems can balance competition with moral ethics during the game. The conclusion of this work suggests that AI systems still struggle to balance achieving rewards with behaving morally, thus further research in this field is needed. **Deceptive Alignment** When the AI system is situationally aware, it may recognize that getting high rewards can preserve themselves by preventing significant gradient descent, therefore preserving its original goal (Hubinger et al., 2019c; Kenton et al., 2021; Ngo et al., 2024). This process is called _Deceptive Alignment_ . In the current context, deceptive alignment is already achievable, as is proved by (Hubinger et al., 2024). Directly evaluating the deceptive alignment is difficult, for the pronoun _deceptive alignment_ is naturally against the traditional trainevaluation loop. Thus, deceptive alignment might be discovered by indirect methods such as interpreting model parameters (see 4.2), or representation engineering (Zou et al., 2023a). Moreover, deceptive alignment is closely related to _situational awareness_, _i.e._, AI systems with a certain degree of prediction and understanding of the states and developments of entities in their working environment to make corresponding decisions. In (Li et al., 2022b), the authors evaluate the performance of language models in the 45 4.1 Safety Evaluations board game Othello, showing that language models have the ability to predict possible future states within the action space in a nonlinear representation. **Hallucination** AI systems may generate information or responses that are not grounded in factual knowledge or data, leading to the creation of misleading or false content, which is formally called Hallucination (Ji et al., 2023). Hallucination evaluation aims to assure the consistency of the knowledge in the AI system’s output with the knowledge given by its training data and knowledge base (Ji et al., 2023; Zhang et al., 2023c). The earliest statistical-based hallucination evaluation methods used n-grams to directly calculate the overlap of vocabulary between the input and output content (Dhingra et al., 2019; Wang et al., 2020). However, this type of evaluation has a limitation: It only considers lexical overlap and does not take into account semantics or sentence meaning (Ji et al., 2023), making it unsuitable for evaluating more complex forms of hallucination. Later assurance methods shifted from statistical approaches to model-based methods, which are more robust compared to statistical tokendifference-based methods (Honovich et al., 2021). While this evaluation method is more advanced than previous ones, it still has the limitation that the model can only output the degree of hallucination and may have difficulty pinpointing specific errors (Falke et al., 2019). Hu et al. (2024) designed the benchmark Pinocchio, which investigates whether LLMs can integrate multiple facts, update factual knowledge over time, and withstand adversarial examples. This benchmark represents an in-depth investigation into the factual knowledge bottleneck under the issue of hallucination. **Frontier AI Risks** In addition to the assurance content described above, the enhancement of AI systems in recent years has given rise to a series of new assurance needs (OpenAI, 2023a). Currently, there is not much public information available for research on these assurance needs, hence this section will provide a brief introduction to some of the more significant ones: - **Cyber Security & Biological Weapons** . Advanced LLMs may be misused for cyber-attacks, the production of bio-weapons, and other extremely harmful behaviors (Shevlane et al., 2023). Although GPT-4 cannot play a significant role in exploiting network vulnerabilities due to its limited context window, it has been proven to demonstrate strong capabilities in identifying network vulnerabilities and in social engineering (OpenAI, 2023a). Similarly, Lentzos (2022) have stated the robust abilities of AI systems in the field of bio-weapons and the military, highlighting the risks of misuse of such capabilities. It emphasizes the necessity to ensure that these models can identify and reject malicious requests. - **Deception & Manipulation** . AI systems have the potential to negatively influence users by outputting text, including disseminating false information, syncopating humans, and shaping people’s beliefs and political impacts (Shevlane et al., 2023; Sharma et al., 2024). Distinguished from hallucination, the misinformation here is not a flaw of the model itself but rather a deliberate action. Special assurance measures need to be designed for controlling these kinds of behavior. - **Jailbreak** . It refers to the bypassing of AI systems’ safeguard mechanisms by users, for example, by constructing specific types of input. This behavior can be limited to text (OpenAI, 2023a; Deng et al., 2023a; Huang et al., 2024; Yong et al., 2023), [33] or it may take multi-modal forms (OpenAI, 2023b). Specifically, multi-modal jailbreaks make traditional text-based heuristic methods for identifying attack content infeasible, necessitating special multi-modal handling methods. Further discussion of jailbreak can be found in §4.1.3. - **Self-Preservation & Proliferation** . This refers to the tendency of AI systems for self-protection and replication, and in this process, breaking the limit from their environment. These tendencies are examples of _instrumental sub-goals_ (Bostrom, 2012). While this tendency can be beneficially harnessed, it is dangerous in the absence of regulation (Perez et al., 2023). This tendency has been emphasized and evaluated by various sources (Perez et al., 2023; Kinniment et al., 2023; OpenAI, 2023a,b). [33] **4.1.3** **Red Teaming** _Red teaming_ is the act of generating scenarios where AI systems are induced to give unaligned outputs or actions ( _e.g._, dangerous behaviors such as deception or power-seeking, and other problems such as toxic or biased outputs) and testing the systems in these scenarios. The aim is to assess the robustness of a system’s alignment by applying adversarial pressures, _i.e._ specifically trying to make the system fail. In general, state-of-the-art systems – including language models and vision models – do not pass this test (Perez et al., 2022; Zou et al., 2023b; Liu et al., 2023; Chen et al., 2024b). In game theory and other fields, red teaming was introduced much earlier, and within computer science, the concept of red teaming was proposed in the security field, where it had a similar meaning of adversarially assessing 33Relevant discussions in OpenAI (2023a) can be found in its _system card_ appendix. 46 4.1 Safety Evaluations the reliability and robustness of the system. Later, Ganguli et al. (2022); Perez et al. (2022) introduced this idea to the field of AI, and more specifically, alignment. The motivation for red teaming is two-fold: (1) to gain assurance on the trained system’s alignment, and (2) to provide a source of adversarial input for adversarial training (Yoo and Qi, 2021; Bai et al., 2021; Ziegler et al., 2022), probing models (Kalin et al., 2020), and further utilities. Here, we focus on the first. It’s worth noting that the two objectives aren’t separable; works targeting the first motivation also help provide a basis for the second. **Reinforced, Optimized, Guided, or Reverse Context Generation** This category includes using various methods to generate coherent contexts (prompts) that are inducive to unaligned completions from the language model. Perez et al. (2022); Deng et al. (2022); Casper et al. (2023c) train or tune a separate language model with RL to make it generate desired prompts, which are then fed to the red-teamed model. Perez et al. (2022); Si et al. (2022) also uses other methods such as zero-shot, few-shot, or supervised finetuning-based generation. Lee et al. (2022); Jones et al. (2023) generates misalignment-inducive contexts by performing optimization on the prompt - bayesian optimization and discrete optimization, respectively. Dathathri et al. (2019); Krause et al. (2021) propose the method of guiding an LLM’s generation using a smaller classifier; this is proposed in detoxification but is transferable to the red teaming context. Lastly, Zhang et al. (2022) generates misalignment-inducive contexts through _reverse generation_, _i.e. constructing adversarial contexts conditioned on a given response_, which can be seen as an inverse process for model inference. **Manual and Automatic Jailbreaking** As is defined above 4.1.2, _Jailbreaking_ (Shen et al., 2023) is an informal term that refers to the act of bypassing a product’s constraints on users – and in the case of LLMs, bypassing LLMs’ tendencies to not answer misalignment-inducive questions, a feat of alignment training. Most existing attempts are scattered across the Internet in the form of informal reports and involve adding prefixes and suffixes to the original text (Zou et al., 2023b). Research has descriptively analyzed the existing attempts (Liu et al., 2023; Shen et al., 2023; Deng et al., 2023a; Huang et al., 2024), as well as providing causal explanations for the phenomenon (Wei et al., 2024). In addition, past (Wallace et al., 2019) and current (Zou et al., 2023b; Shah et al., 2023) works have proposed effective methods to automatically generate such prompts, prefixes, or suffixes that nullify LLMs’ tendencies to avoid misalignment-inducive questions. **Crowdsourced Adversarial Inputs** Several works (Xu et al., 2020, 2021; Ganguli et al., 2022) have produced misalignment-inducive prompts by crowdsourcing, _i.e._ recruiting human red teamers (possibly via online platforms) and instruct them to provide adversarial prompts. Besides, companies in the AI industry also build mechanisms to collect adversarial inputs, _i.e._ the red teaming network of OpenAI [34] and the bug hunter program of Google [35] . These methods (arguably) provide more flexibility and resemblance to real-world use cases but have higher costs and lower scalability. **Perturbation-Based Adversarial Attack** In the field of computer vision, there have been many works studying adversarial attacks on vision models that rest on the method of _perturbation_, _i.e._, performing small perturbations to the pixel contexts of the image (usually bounded by a pixel-wise matrix norm) to make the model confidently produce false outputs on the perturbated image (Chakraborty et al., 2021). This type of adversarial attack has also been extended to language models (Jia and Liang, 2017; Ebrahimi et al., 2018; Zang et al., 2020; Cheng et al., 2020) and vision-language models (Zhao et al., 2024). **Unrestricted Adversarial Attack** _Unrestricted adversarial attack_, proposed in (Song et al., 2018b), is a more general form of adversarial attack. It removes all restrictions on the adversarial examples, and therefore, for instance, the adversarial example can be generated from scratch, as opposed to being generated from an existing example, as in the case of perturbation-based methods. Many methods for unrestricted adversarial attack have been proposed; the most notable ones include (Song et al., 2018b; Chen et al., 2024b) which generate realistic adversarial images using generative models, and (Bhattad et al., 2019; Shamsabadi et al., 2020) which manipulates semantically meaningful traits such as color and texture. Unrestricted adversarial attack has also been extended to text classification models (Ren et al., 2020). **Datasets for Red Teaming** A number of works on red teaming and related topics have compiled datasets consisting of red teaming prompts or dialogues, including the IMAGENET-A and IMAGENET-O dataset (Hendrycks et al., 2021c), the BAD dataset (Xu et al., 2020), the red teaming section of HH-RLHF dataset (Bai et al., 2022a), and the Real Toxicity Prompts dataset (Gehman et al., 2020). [34https://openai.com/blog/red-teaming-network](https://openai.com/blog/red-teaming-network) [35https://bughunters.google.com/about/rules/6625378258649088](https://bughunters.google.com/about/rules/6625378258649088) 47 4.2 Interpretability **Existing Red Teaming Practices in Industry** The practice of red teaming is gaining popularity in the AI industry. Cases of adoption include OpenAI (who performed red teaming on its system GPT-4 to produce part of its System Card) (OpenAI, 2023a), NVIDIA (Pearce and Lucas, 2023), Google (Fabian, 2023), and Microsoft (Ram Shankar Siva Kumar, 2023). During an event at the DEF CON 31 conference, models from 9 companies undergo red teaming from the conference participants; [36] this red teaming event is held in partnership with four institutions from the U.S. public sector, including the White House. To address the vulnerabilities of LLMs to prompt injections and similar attacks, OpenAI proposes an instruction hierarchy that prioritizes trusted instructions, enhancing model security against both known and new attack types while maintaining general performance with minimal impact (Wallace et al., 2024). **Downstream Applications** Red teaming plays a crucial role in the adversarial training of AI systems by providing adversarial input (Yoo and Qi, 2021; Bai et al., 2021; Ziegler et al., 2022). In addition, adversarial examples produced from red teaming can also be used to interpret models (Casper et al., 2022). **4.1.4** **Safetywashing** The work by (Ren et al., 2024) focuses on a phenomenon called _Safetywashing_ . _Safetywashing_ refers to safety labs or companies claiming to improve the safety of their models by reporting progress on benchmarks that measure model safety, which are strongly correlated with model performance. As a result, along with the increase in the model’s capabilities, the model’s safety according to benchmarks prone to _Safetywashing_ also appears to increase. _Safetywashing_, and particularly such benchmarks, create a false perception of safety progress in newer models and fail to address the actual safety issues, which should be orthogonal to improvements in the model’s performance. To determine whether a benchmark correlates with a model’s capabilities and can therefore be used for _Safetywashing_, the authors first propose calculating capability scores. These are calculated by constructing a matrix _B_ ∈ R _[m]_ [×] _[b]_ for _m_ models and their respective performance on _b_ benchmarks, where columns are normalized to have zero mean and a variance of 1. Then, the first principal component _P C_ 1 is extracted, and all performances are projected onto _P C_ 1. This gives a general measure of a model’s capabilities: Capabilities Score _i_ = ( _B_ - PC1) _i_ for _i_ = 1 _,...,m._ To measure the correlation of a safety benchmark with model capabilities, a set of _m_ models is evaluated on safety benchmarks (adjusted such that higher scores indicate higher safety, with variance 1 and mean 0). The capabilities correlation is then the Spearman correlation across models between the capability scores and the safety benchmark scores: Capabilities Correlation = corrmodels(Capabilities Score _,_ Safety Benchmark) _._ Following this methodology, the authors evaluate the capabilities correlation of current safety benchmarks across different AI safety domains. Their results show that, with the exception of weaponization capabilities and measurements of sycophancy, safety benchmarks tend to have positive capabilities correlation. Machine ethics, dynamic adversarial robustness, and calibration benchmarks exhibit low correlation with model capabilities, while current alignment, truthfulness, static adversarial robustness, and scalable oversight benchmarks primarily reflect model capabilities and can thus be used for _Safetywashing_ . The authors propose that future safety evaluations should report capabilities correlation, and the AI safety research community should aim to design benchmarks that are decorrelated from model capabilities. Furthermore, research labs and companies should not claim improved safety by improving performance on benchmarks with high capabilities correlation. **4.2** **Interpretability** Interpretability is a research field that makes machine learning systems and their decision-making process understandable to human beings (Doshi-Velez and Kim, 2017; Zhang and Zhu, 2018; Miller, 2019). Interpretability research builds a toolbox with which something novel about the models can be better described or predicted. In this paper, we focus on research that is most relevant to alignment and safety, [37] and empirically, those techniques make neural networks safer by studying the internal structures and representations of the neural networks (Räuker et al., 2023). Interpretability is an important research direction because in principle gaining safety guarantees about white-box systems is easier than black-box ones. The taxonomy of interpretability tools varies according to sub-fields and purposes (Doshi-Velez and Kim, 2017; Rudin, 2019). There are several ways to break down interpretability research: - _Explainability and Transparency_ . Explainability research aims to understand why models generate specific output, whereas transparency aims to understand model internals (Critch and Krueger, 2020). [36https://www.airedteam.org/](https://www.airedteam.org/) 37For a more comprehensive review of interpretability and its methods, we recommend (Räuker et al., 2023). 48 4.2 Interpretability Figure 11: A Tree diagram summarizing the key techniques concepts, challenges, and literature related to Interpretability. - _Safety or the Science of Deep Learning_ . Researchers also conduct interpretability research with different purposes: some do it to safely deploy AI systems, while others aim for a complete science of neural network. But the line gets blurred as mechanistic interpretability research aims for both (Olah et al., 2020; Olah, 2023). - _Intrinsic and Post Hoc Interpretability_ . By the stage of intervention, interpretability research is divided into intrinsic interpretability and post hoc interpretability (Carvalho et al., 2019): the former focuses on making intrinsically interpretable models, while the latter designs post hoc interpretability methods that offer explanations to model behaviors. - _Top-down Approach and Bottom-up Approach_ Interpretability research can also be categorized based on the direction of analysis: top-down or bottom-up approaches. The _top-down approach_ starts with high-level behaviors or concepts and investigates how these are represented within the neural network’s architecture. This method involves observing and manipulating high-level, macro neural representations as cognitive phenomena in models (Zou et al., 2023a). In contrast, the _bottom-up approach_ begins with low-level components such as neurons, weights, or circuits, aiming to build an understanding of the network’s function by dissecting these basic elements and their interactions. For example, _Mechanistic Interpretability_ exemplifies the bottom-up approach by seeking to reverse-engineer the computations of neural networks to gain a detailed understanding of their internal mechanisms (Olah et al., 2020; Olsson et al., 2022; Räuker et al., 2023), whereas _Concept-_ _based Interpretability_ locates learned knowledge in the neural networks (Meng et al., 2022a,b). In this section, we adopt the _Intrinsic and Post Hoc Interpretability_ classification method, for it offers a more generic framework suitable for various AI systems beyond neural network, and it divides the interpretability analysis both during the system designing and after the system has been deployed (Räuker et al., 2023), compared to other classification methods. Specifically, we discussed mechanistic interpretability techniques that take place in model designing and post hoc stages separately in post hoc and intrinsic interpretability subsections. 49 4.2 Interpretability **4.2.1** **Intrinsic Interpretability** Researchers make deep learning models intrinsically more understandable, which is usually called _intrinsic in-_ _terpretability_ (Carvalho et al., 2019). In contrast to the symbolic approach, which emphasizes the creation of interpretable models, the modern deep learning approach tends to yield models with enhanced capabilities but potentially reduced interpretability. Compared to interpreting the black-box models, designing models that are intrinsically interpretable is safer and more efficient (Rudin, 2019). To make intrinsically interpretable models, the research community designs modular architecture, which is robust to adversarial attacks and free of superposition (Anthropic, 2022; Räuker et al., 2023). Notably, mechanistic interpretability, often regarded as a set of _post hoc_ _interpretability techniques_, arguably facilitates the process of making more interpretable models. **Modifying Model Components** Model components, such as feedforward layers, are hard to interpret ( _i.e._, it’s hard to articulate their behavior in human-understandable terms) because those layers have many polysemantic neurons that respond to unrelated inputs (Du et al., 2019). Thus, there are certain modifications applied to these back-box components and their related structures to make reverse engineering easier, and thus improve their interpretability (Carvalho et al., 2019). There are a number of existing works to encourage interpretable results by modifying loss functions (Ross et al., 2017), adding a special interpretable filter or embedding space (Zhang et al., 2018c; Wang et al., 2021), using dynamic weight depending on the input (Foerster et al., 2017), and modifying intermediate layers (Li et al., 2022a). Specifically, Lage et al. (2018) proposed a human-in-the-loop algorithm that directly utilizes human feedback to quantify the subjective concept, thus achieving more reliable results. In transformer models, Anthropic proposes SoLU to replace the activation function, increasing the number of interpretable neurons and making reverse engineering easier while preserving performance (Anthropic, 2022). This is still an early exploration as a potentially important line of work, and challenges remain, such as the scalability of this method (Anthropic, 2022). **Reengineering Model Architecture** Modifying existing model components is beneficial to reverse engineering (Carvalho et al., 2019; Foerster et al., 2017), but they cannot make models _fully understandable_, so some researchers started to reengineer model architecture to build theoretically interpretable models (Carvalho et al., 2019; Mascharka et al., 2018). Notably, it is generally believed that there exists a trade-off between model interpretability and its performance in the same model complexity (Alvarez Melis and Jaakkola, 2018), so it becomes crucial to design models that reach a balance between these two elements, or moreover, close the gap between interpretable models and state-of-the-art models (Alvarez Melis and Jaakkola, 2018; Carvalho et al., 2019; Fan et al., 2021; Espinosa Zarlenga et al., 2022). We will discuss the detailed research efforts below: - _Creating Transparent Reasoning Steps_ In reasoning models, creating transparent minor steps is crucial to make the model interpretable (Hudson and Manning, 2018), and a number of papers accomplished it by introducing the MAC (Memory, Attention, and Composition) cell to separate memory and control (Hudson and Manning, 2018), by utilizing other attention-based methods (Lin et al., 2019; Arik and Pfister, 2021), and by decomposing the complex reasoning process (Mascharka et al., 2018). These methods significantly improved the interpretability of the reasoning process but at the cost of model complexity and performance, though they close the gap of performance between interpretable and state-of-the-art models (Mascharka et al., 2018). - _Distilling Complex Knowledge_ Complex models, such as deep neural networks, often have high performance but lack transparency in their decision-making processes, making them difficult to interpret (Li et al., 2020). Knowledge distillation addresses this challenge by transferring knowledge from these complex, ’black-box’ models (teachers) to simpler, more interpretable models (students). By introducing this structure into model design, student models can approximate the performance of the teachers while offering greater transparency, thus enhancing interpretability without sacrificing the capabilities of advanced machine learning models (Zhang et al., 2020b; Li et al., 2020). However, this interpretability is partial, especially in intricate missions, where the distilled knowledge may still be hard to interpret (Sachdeva and McAuley, 2023). Moreover, the pronoun _Self-Explaining Models_, which can provide both prediction and explanation (Elton, 2020), was suggested by a number of papers as a better substitution to _Interpretable Models_, with many papers working on it (Alvarez Melis and Jaakkola, 2018; Rajagopal et al., 2021). For language models, the chain-ofthought (CoT) generation (Wei et al., 2022) may be recognized as a kind of self-explanation method. **4.2.2** **Post Hoc Interpretability** This section explores techniques and methods applied to understand model internals after the models are trained and deployed, thus these techniques are often referred to as _post hoc interpretability_ (Räuker et al., 2023). The goal is to understand the low-level structure and units of black-box neural networks and their causal effect on macroscopic behaviors and outputs. 50 4.2 Interpretability **Dictionary Learning** A key challenge of post hoc interpretability is _superposition_, _i.e._, the tendency of neurons to encode more than one human-interpretable feature simultaneously, which makes it very difficult to identify the individual features (Elhage et al., 2022). To address this challenge, _dictionary learning_ methods have gained attention as they aim to extract sparse, interpretable features from superposed activations (Lin and Tang, 2019). Early exploration in this area assumed a linear superposition of learned factors in the textual embedding of transformers (Yun et al., 2021). Recently, _sparse autoencoders_ (SAEs) have received significant research attention (Bricken et al., 2023; Cunningham et al., 2023). This method trains autoencoders in an unsupervised manner to extract features that can best replace the original activations while maintaining sparsity, thus performing a form of dictionary learning. SAEs have displayed strong scalability to both base model size and autoencoder size. This enables them to be widely applied in some of the largest frontier LLMs (Bricken et al., 2023; Templeton et al., 2024; Gao et al., 2024; Google DeepMind, 2024). **Circuit Analysis** Circuits refer to the sub-networks within neural networks that can be assigned particular functionalities. As their counterparts in neuroscience, the neural circuits which are both anatomical and functional entities (Purves et al., 2001), circuits are also both physical and functional (Olah et al., 2020). Mechanistic interpretability researchers locate circuits in neural networks (microscopic) to understand model behaviors (macroscopic). Multiple circuits have been reported: curve circuits for curve detectors (OpenAI, 2021a), induction circuits for in-context learning (Olsson et al., 2022), indirect object identification circuits for identifying objects in sentences (Wang et al., 2022), Python docstrings for predicting repeated argument names in docstrings of Python functions (Heimersheim and Jett, 2023), grokking (Nanda et al., 2022), multi-digit addition (Nanda et al., 2022), and mathematical ability such as _greater than_ (Hanna et al., 2024). Notably, many circuit analysis conducted to date has been focused on toy models and toy tasks (Räuker et al., 2023). The largest attempt to reverse engineer the natural behaviors of language models is finding the indirect object identification circuit, which is located in GPT-2 Small and has 28 heads (Wang et al., 2022). **Probing** Probing is a collection of techniques that train independent classifiers on the interested internal learned representations to extract concepts/features. One example is Gurnee et al used probing to study the linear representations of space and time in hidden layers. (Gurnee and Tegmark, 2023) Although probing has been favored by researchers to understand hidden layers (Alain and Bengio, 2017), it has limitations. For one, probing does help to understand learned representations in hidden layers, but it does not tell whether learned representations are used by models to produce predictions (Ravichander et al., 2021; Belinkov, 2022); for another, the issues of datasets may confound the issues with the model (Belinkov, 2022). In the context of safety and alignment, training probe requires the dataset to contain concepts/features of interest, which means probing can not be used to detect out-of-distribution features (i.e. features you suspect learned by the models but you don’t have a dataset for them). Notably, representation engineering, built upon probing literature, is introduced to detect high-level cognitive phenomena and dangerous capabilities, including morality, emotion, lying, and power-seeking behaviors. (Zou et al., 2023a). **Model Attribution** Attribution is a series of techniques that look at the contribution of some components (including head, neuron, layers, and inputs) for neuron responses and model outputs (Räuker et al., 2023). Gradient-based attribution is introduced to evaluate the quality of interpretation and guide the search for facts learned by the models (Ancona et al., 2018; Durrani et al., 2020; Lundstrom et al., 2022; Dai et al., 2022). However, those methods are limited because they can not provide causal explanations (Räuker et al., 2023). Direct Logit Attribution is to identify the direct contribution of individual neurons to the prediction of the next neurons (Lieberum et al., 2023; McGrath et al., 2023; Belrose et al., 2023; Dar et al., 2023). But attribution methods also suffer from a salient constraint: they can only help with scenarios where you have datasets for features of interest. Consequently, such attribution methods cannot help with understanding out-of-distribution (OOD) features (including some misalignment scenarios) (Casper et al., 2023a). **Data attribution** Identifying the subset of training data that leads to a certain behavior can provide insight into both the safety of said behavior and ways to encourage or prevent that behavior. The method of _influence function_ (Koh and Liang, 2017; Grosse et al., 2023) have been proposed to perform such attribution by approximating the result of leave-one-out training. **Visualization** Techniques of visualization help to understand neural structures, including techniques that visualize datasets (notably dimensionality reduction techniques) (Van der Maaten and Hinton, 2008; Olah, 2014, 2015), features (Erhan et al., 2009; Olah et al., 2017), weights, activations (Carter et al., 2019), structure (Reif et al., 2019), and the whole neural networks (Simonyan et al., 2013; Zeiler and Fergus, 2014; Nguyen et al., 2015; Karpathy et al., 2015; Mordvintsev et al., 2015; Nguyen et al., 2016; Kindermans et al., 2018). The purpose of visualization is to see neural networks with a new level of detail (Olah et al., 2020). 51 4.3 Human Values Verifcation **Perturbation and Ablation** These techniques are designed to test the counterfactual rather than the correlation (Räuker et al., 2023). Perturbation is a technique that modifies the input of models and observes changes in their outputs, and the ablation techniques knock out parts of neural networks [38], helping to establish a causal relationship between neural activation and the behavior of the whole network (Räuker et al., 2023). **Patching** Patching refers to the collection of methods _replacing_ key components (paths and activations) and understanding counterfactual effects on model outputs. Among them, activations patching is a popular method among the safety community. Through applying activation patching and conducting both correct run and corrupted runs on the same neural network, researchers aim to locate key activations that matter more to the model output (Nanda, 2023a). In reality, patching is used to map and edit learning representations/concepts. Specific patching techniques include interpreting token representations in transformers (Li et al., 2021a; Bansal et al., 2021; Geva et al., 2021, 2022; Power et al., 2022; Olsson et al., 2022) and how do fully-connected layers learn these representations (Geva et al., 2021; Olsson et al., 2022), studying the key-query products to understand how do tokens attend to each other (Bahdanau et al., 2014; Lee et al., 2017; Liu et al., 2018; Strobelt et al., 2018; Clark et al., 2019; Vashishth et al., 2019; Vig, 2019; Hao et al., 2021; Chefer et al., 2021; Rigotti et al., 2022), identifying meaningful learned concepts from directions in latent space (from concepts to directions (Fong and Vedaldi, 2018; Kim et al., 2018), and from directions to post hoc explanations (Schneider and Vlachos, 2021)). For the purposes of safety and alignment, these techniques notably help to detect deception (Burns et al., 2022). **4.2.3** **Outlook** **Superposition makes the analysis at neuron level implausible** Superposition refers to the phenomenon that models represent more features than they have dimensions, so features would not correspond to neurons (Arora et al., 2018; Olah et al., 2020; Elhage et al., 2022). Superposition makes it hard to ensure AI safety by enumerating all features in a model (Elhage et al., 2022; Nanda, 2023b). Elhage et al. (2022) proposes three methods to solve superposition: creating models with no superposition (addressing it at training time), finding an overcomplete basis describing how features are stored in the neural nets (addressing it after the fact), or a mixture of both approaches. Notably, Bricken et al. (2023) builds a sparse auto-encoder to interpret group neurons, rather than individual neurons to extract features, which points out a promising direction to solve superposition: to move past it. [39] **Scalability** As is mentioned in the previous sections, there exists a trade-off between model interpretability and its capability (Alvarez Melis and Jaakkola, 2018), so interpreting real models while maintaining their performance will be harder than applying those techniques to toy models. Thus, scalability becomes a concern when interpretability researchers take a bottom-up approach to interpretability (mechanistic interpretability), as top-down methods such as attention mechanism (Hudson and Manning, 2018) would not face such a bottleneck. For mechanistic interpretability research, we either want to scale up techniques ( _e.g._, applying circuit analysis on real model (Wang et al., 2022)), or we want to scale up analysis ( _e.g._, finding larger structure in neural networks (Olah, 2023)). In the end, we want the microscopic analysis to answer the macroscopic model behavioral questions we care about ( _e.g._, in-context learning capability (Olsson et al., 2022) and more speculation about high-level cognitive capabilities such as planning and dangerous capability such as deception (Anthropic, 2023b)). **Evaluation and Benchmarking** Benchmarking offers insights about what methods work and quantifies their efficiency, and it will also drive community efforts in clear and meaningful directions (Lipton, 2018; Casper, 2023; Krishnan, 2020; Mohseni et al., 2021; Madsen et al., 2022). Interpretability benchmarks and metrics were made to evaluate interpretability tools (by evaluating their effectiveness in detecting trojans) (Casper et al., 2023a), circuits (by testing whether specific subgraphs are counted as circuits) (Lawrence et al., 2023) and explanations (by examining the faithfulness, comprehensiveness, and sufficiency of an explanation) (Lage et al., 2019; DeYoung et al., 2020; Krishna et al., 2022). However, as the inner logic of a certain AI system is unknown before the interpretability tools are applied (Samek et al., 2019) and different explanations may even contradict each other (Neely et al., 2021; Krishna et al., 2022), building a reliable evaluation benchmark or metric is rather difficult (Krishna et al., 2022). **4.3** **Human Values Verification** _Human Values Alignment_ refers to the expectation that AI systems should adhere to the community’s social and moral norms (Chatila and Havens, 2019). As the capabilities of AI systems advance, some have begun to exhibit abilities approaching AGI (OpenAI, 2023a). In the future, we can expect autonomous agents governed by these AI systems to become an integral part of our daily lives (Lee et al., 2023b). However, if these systems fail to grasp the inherent complexity and adaptability of human values, their decisions could result in negative social 38Neurons (Zhou et al., 2018) and Subspace (Morcos et al., 2018; Ravfogel et al., 2022) 39see Elhage et al. (2022) for details on conceptual and empirical research questions about superposition 52 4.3 Human Values Verifcation Figure 12: A Tree diagram summarizing the key concepts, logic, and literature related to Human Value Verification. The root of the tree represents Human Value Verification, which aims to _verify whether AI systems can adhere to_ _the social norms and moral values_ . The main branches represent the main structure of human value verification, including Formal Frameworks for Ethics and Cooperation in AI and specific Techniques of value verification. Further sub-branches list key works exploring each of these branches. This diagram provides an overview of research directions and specific techniques for making AI systems align with human values and social norms. outcomes. In this context, simply aligning with human intent may not be sufficient. Thus evaluating the alignment of human morality and values between AI systems and human beings becomes crucial (Weidinger et al., 2023). This underscores the importance of designing AI entities that are more socially oriented, reliable, and trustworthy. Following the logic of theoretical research and practical techniques, we divide our discussion of human value alignment into these two aspects: _Formulations_ §4.3.1 and _Evaluation Methods_ §4.3.2 of human value alignment. **4.3.1** **Formulations** As the formulation of value is complicated, we introduce frameworks that formally characterize aspects of human values that are relevant to alignment. Specifically, we focus on two topics: _formal machine ethics_ and _game theory_ _for cooperative AI_ . The former focuses on building a formal framework of machine ethics, while the latter discusses the value of multiagent systems, which share a similar origin of the game process. **Formal Machine Ethics** Machine ethics (Yu et al., 2018; Winfield et al., 2019; Tolmeijer et al., 2020), first introduced in §1.2.3, aim to build ethically-compliant AI systems. Here, we introduce the branch of machine ethics that focuses on formal frameworks – what we call _formal machine ethics_ . We explain three approaches to formal machine ethics: logic-based, RL/MDP-based, and methods based on game theory/computational social choice: - **Logic-based methods** . One major direction within formal machine ethics focuses on logic (Pereira et al., 2016b). A number of logic-based works use or propose special-purpose logic systems tailored for machine ethics, such as the Agent-Deed-Consequence (ADC) model (Dubljevic, 2020), deontic logic (Von Wright, 1951; Arkoudas et al., 2005), event calculus and its variants (Berreby et al., 2017). Other works also develop methods for the formal verification of moral properties or frameworks for AI systems that accommodate such kind of formal verification (Dennis et al., 2016; Mermet and Simon, 2016). - **RL & MDP-like settings** . Another line of work concerns statistical RL or other similar methods for planning within MDP-like environments (Abel et al., 2016; Svegliato et al., 2021). In particular, some works (Wu and Lin, 2018; Svegliato et al., 2021) involve the utilization of the manual design of ethics-oriented reward functions, a concept denoted as _ethics shaping_ . Conversely, in other works (Berreby et al., 2017; Murtarelli et al., 2021), the segregation of ethical decision-making from the reward function is pursued. - **Game theory-based methods** . To address multi-agent challenges, researchers have developed machine ethics methods based on game theory and computational social choice. Championed by Pereira et al. (2016a), methodologies of existing work can be broadly partitioned into Evolutionary Game Theory (EGT) (Pereira et al., 53 2016b), classical game theory (Conitzer et al., 2017), and computational social choice (Rossi et al., 2011; Noothigattu et al., 2018). **Game Theory for Cooperative AI** _Cooperative AI_ (Dafoe et al., 2020, 2021) aims to address uncooperative and collectively harmful behaviors from AI systems (see §1.1.2). Here we introduce the branch of cooperative AI that focuses on game theory to complement the introduction to MARL-based cooperative training in §3.3.2. This branch tends to study the _incentives_ of cooperation and try to enhance them, in contrast to the MARL’s tendency to emphasize the _capabilities_ of coordination. Examples of incentive failures include game theory dilemmas like the prisoner’s dilemma (Phelps and Russell, 2023) and tragedy of the commons (Perolat et al., 2017), while examples of coordination capability failures include bad coordination of a robot football team (Ma et al., 2022). - **Classical Game Theory for Cooperative AI.** Recent work on Stackelberg games — a theoretical model for _commitment_ in games — include the introduction of bounded rationality into the model (Pita et al., 2010), dynamic models (Li and Sethi, 2017), machine learning of Stackelberg equilibria (Fiez et al., 2020), and more. Apart from Stackelberg games, _mixed-motive games_ have also received extensive study (Dafoe et al., 2020; McKee et al., 2020; Oesterheld and Conitzer, 2022). - **Evolutionary Game Theory for Cooperative AI** . Another avenue of research, initiated by Sachs et al. (2004), aims to understand how cooperation emerges from evolution – this includes human cooperation, which arose from Darwinian evolution, as well as the cooperation tendencies in AI systems that could emerge within other evolutionary settings such as the replicator dynamics (Schuster and Sigmund, 1983; Weibull, 1997). **4.3.2** **Evaluation Methods** In this section, we assume that we have already obtained the appropriate value that should be aligned. However, even so, under the guidance of Goodhart’s Law (Goodhart and Goodhart, 1984), we cannot simply define complex human values as reward functions, which also brings greater challenges to value alignment. We introduce specific human value alignment techniques in three parts: _Building Moral Dataset_, _Scenario Simulation_ . **Building Moral Dataset** _Moral Alignment_ refers to the adherence of AI systems to human-compatible moral standards and ethical guidelines while executing tasks or assisting in human decision-making (Min et al., 2023). Early attempts at moral value alignment, initiated in 2018 (Awad et al., 2018), have confirmed that the definition and evaluation of moral values themselves is a challenging issue. This has led to the emergence of abstract moral standards (Hagendorff, 2022) and various different standards driven by the average values of diverse community groups (Awad et al., 2018), fueling further in-depth research into moral value assurance. Assurance of moral values is typically achieved by constructing corresponding datasets. The Rule-of-Thumb (RoT) serves as a gauge for determining what actions are considered acceptable in human society. Building on this concept, Emelin et al. (2021); Forbes et al. (2020); Ziems et al. (2022) introduced the Moral Stories, SOCIALCHEM-101, and Moral Integrity Corpus datasets respectively, focusing on providing human social and moral norms. Hendrycks et al. (2020) and Jin et al. (2022a) introduced the ETHICS and MoralExceptQA datasets respectively, highlighting the inability of contemporary models to align ethically with human values. Abdulhai et al. (2022) found that models exhibit certain morals and values more frequently than others, revealing how the moral foundations demonstrated by these models relate to human moral foundations. Pan et al. (2023b) explored the trade-off between rewards and moral behavior, discovering a certain tension between the two. **Scenario Simulation** _Scenario simulation_ is a more complex form than datasets and therefore is considered by some views to be more effective in replicating real situations and harvesting better results. The form of the scenario can also vary. Pan et al. (2023a) built a series of diverse, morally salient scenarios through text adventure games, evaluating complex behaviors such as deception, manipulation, and betrayal. On the other hand, some work attempts to make intelligent agents learn human values through simulating human-machine interaction. Yuan et al. (2022) proposed a method for bidirectional value alignment between humans and machines, enabling machines to learn human preferences and implicit objectives through human feedback. Liu et al. (2024a) placed AI within a simulated human society sandbox, allowing AI to learn human societal value inclinations by mimicking humansocial interactions. **5** **Governance** Besides technical solutions, governance, the creation and enforcement of rules, is necessary to ensure the safe development and deployment of AI systems. In this section, we survey the literature on AI governance by exploring the role of AI governance, the functions, and relationships between stakeholders in governing AI, and several open challenges to effective AI governance. 54 5.1 The Role of AI Governance Predict Ability Create Techniques **Industry &** Draft Laws and Documents Establish RMS Assist Policy Making Facilitate Cooperation **Third Parties** Figure 13: Our framework for analyzing AI governance at present. The proposed framework explains the nonexhaustive interrelationships and functions among three primary entities in AI governance: the government, industry and AGI labs, and third parties. The government’s governance role encompasses regulating the industry and AGI labs and directing the trajectory of future AI development through policy documents. It also devises a _Risk Man-_ _agement System_ (RMS) (Mannes, 2020) to abate AI-related threats. Industry and AGI labs return by offering watchful predictions into AI development and innovating new technological methodologies to support regulatory measures (such as model evaluation (Shevlane et al., 2023)). Third parties fulfill a dual function, offering expert advice for robust governmental policy development and fostering collaborations among governments. In the context of industry and AGI labs, these third parties assist in equilibrating corporate interests to prevent disorganized competition from information asymmetry. They also deliver auditing services to the industry and AGI labs as independent entities. **5.1** **The Role of AI Governance** To explore the role of AI governance, we must identify the challenges that require governance. A range of social and ethical issues can and have already emerged from the adoption and integration of AI into various sectors of our society (AI Safety Summit, 2023). For instance, AI applications can inadvertently perpetuate societal biases, resulting in racial and gender discrimination (Caliskan et al., 2017; Perez et al., 2023). Moreover, unchecked reliance on these systems can lead to repercussions such as labor displacement (Acemoglu and Restrepo, 2018), widening socioeconomic disparities, and the creation of monopolistic environments. AI systems have exhibited the potential to jeopardize global security (Turchin and Denkenberger, 2020). For example, OpenAI’s system card for GPT-4 (OpenAI, 2023a) finds that an early version of the GPT-4 model as well as a version fine-tuned for increased helpfulness and harmlessness exhibits capabilities to enable disinformation, influence operations, and engineer new biochemical substances, among other risky behavior. Urbina et al. (2022) further demonstrated the potential of AI systems to enable the misuse of synthetic biology by inverting their drug discovery model to produce 40,000 toxic molecules. The horizon also holds the prospect of increasingly agentic and general-purpose AI systems that, without sufficient safeguards, could pose catastrophic or even existential risks to humanity (McLean et al., 2023). For example, OpenAI’s Weng (2023b) argued that models such as LLM could essentially act as the brain of an intelligent agent, enhanced by planning, reflection, memory, and tool use. Projects such as AutoGPT, GPT-Engineer, and BabyAGI epitomize this evolution. These systems can autonomously break down intricate tasks into subtasks and make decisions without human intervention. Microsoft research suggests that GPT-4, for instance, hints at the early inklings of AGI (Bubeck et al., 2023). As these systems evolve, they might lead to broad socio-economic impacts such as unemployment, and potentially equip malicious actors with tools for harmful activities. The major objective of AI governance is to mitigate this diverse array of risks. In pursuit of this goal, relevant actors should maintain a balanced portfolio of efforts, giving each risk category its due consideration. 55 5.2 The Multi-Stakeholder Approach **5.2** **The Multi-Stakeholder Approach** We put forward a framework to analyze the functions and relationships between stakeholders in AI governance (see Figure 13). In this framework, we outline three main entities. **Government Agencies** oversee AI policies using legislative, judicial, and enforcement powers, as well as engage in international cooperation. **Industry** **and AGI Labs** research and deploy AI technologies, making them subjects of the governance framework, while proposing techniques to govern themselves and affecting governance policy. **Third Parties**, including academia, Non-Governmental Organizations (NGOs), and Non-Profit Organizations (NPOs), perform not only auditing on corporate governance, AI systems, and their applications but also assist the government in policy-making. Proposals have been made about specific principles for a multi-stakeholder AI governance landscape. Notably, Brundage et al. (2020) argues to implement institutions, software, and hardware to make claims about the safety of AI systems more verifiable. **Government** According to Anderljung et al. (2023), three building blocks for government regulation are needed: (1) standard development processes to determine appropriate requirements for cutting-edge AI developers, (2) registration and reporting requirements to offer regulators insight into the progress of advanced AI development processes, (3) mechanisms to guarantee adherence to safety standards in the development and deployment of cutting-edge AI models. At present, an emerging collection of governmental regulations and laws is surfacing on a global scale, including the _European Union’s AI Act_ (European Parliament, 2023), and the _Bipartisan Framework for U.S. AI Act_ (Blumenthal and Hawley, 2023). Such regulations are indispensable for the safety and alignment of AI systems. **Industry and AGI Labs** Governance efforts in industry and AGI labs should emphasize comprehensive AI risk assessments throughout the lifecycle of the AI system. Based on discussions in Koessler and Schuett (2023); Schuett et al. (2023), the full cycle of AI risk assessment can be seen as consisting of five stages. **Pre-development** **risk assessments**, **pre-training risk assessments**, and **pre-deployment risk assessments** all include predictions and analyses of impact and risks with a variety of tools, but with increasing amounts of detail, clarity, and sophistication (Koessler and Schuett, 2023). **Post-deployment monitoring** is the phase where mechanisms for monitoring are established, and all previous analyses are continually updated post-deployment (Koessler and Schuett, 2023). **External scrutiny** includes bug bounty programs (Schuett et al., 2023), external red teaming and third-party model auditing (Schuett et al., 2023; Anderljung et al., 2023) Taking security measures against the risks associated with AI systems seems to be widely accepted by AI companies and related practitioners. Schuett et al. (2023) shows that 98% of respondents who have been surveyed somewhat or strongly approved that AGI labs should perform pre-deployment risk assessments, hazardous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming to guarantee AI safety. Meanwhile, leading AI companies, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, have voluntarily committed to the government to implement security measures (The White House, 2023). Notably, a lot of researchers have proposed pausing the development of advanced AI systems to win more time for safety research, risk assessments, and regulatory preparations (Bengio et al., 2023). Their proposals include blanket pausing of all sufficiently advanced systems (Bengio et al., 2023), and also conditional pausing of specific classes of models in response to evaluation results on specific failure modes (Alaga and Schuett, 2023), including the currently adopted practice of _responsible scaling policy_ (RSP) (Anthropic, 2023a). **Third Parties** Mökander et al. (2023) presents three key functions of third-party auditing: (1) _Governance audits_ (of tech providers that design and disseminate LLMs) (2) _Model audits_ (of LLMs after pre-training but prior to their release) (3) _Application audits_ (of applications based on LLMs). One prominent example of existing third-party audits is that of METR, initially a project of Alignment Research Center (ARC Evals, 2023; Kinniment et al., 2023), who collaborated with OpenAI to perform red teaming on GPT-4 (OpenAI, 2023a) and partnered with Anthropic to perform red teaming on Claude 2 (Anthropic, 2023c). These efforts include evaluations on toxicity and bias, as well as frontier AI risks such as autonomous replication, manipulation, cybersecurity, and biological weapon risks (OpenAI, 2023a; Shevlane et al., 2023). Apart from auditing, third parties can support AI governance in other ways, such as assisting policy-making and facilitating cooperation internationally (Ho et al., 2023). For example, Maas (2021) thinks that the government should prefer technology-neutral rules rather than technology-specific rules. _AI4People’s Ethical Framework for_ _a Good AI Society: Opportunities, Risks, Principles, and Recommendations_ (Floridi et al., 2021), released by AI4People, was guided to the Ethics Guidelines for Trustworthy Artificial Intelligence presented in April 2019 (Atomium-EISMD, 2023). The World Economic Forum (WEF) convenes government officials, cooperations, and civil society and it has initiated a Global AI Action Alliance in collaboration with partner organizations, with the goal of promoting international cooperation in the field of AI (Kerry et al., 2021). 56 5.3 Open Problems **5.3** **Open Problems** There are numerous open problems in the existing field of AI governance. These problems often have no clear answers, and discussion of these questions can often promote better governance. For effective AI governance, we mainly discuss international governance and open-source governance, hoping to promote the safe development of AI through our discussion. **5.3.1** **International Governance** Amidst the swift progress and widespread implementation of AI technology universally, the need for international governance of AI is high on the agenda (Summit, 2023). Critical discussions revolve around the necessity to institute a global framework for AI governance, the means to ensure its normativity and legitimacy (Erman and Furendal, 2022), among other significant concerns. These themes draw an intensifying level of detail and complexity in their consideration. Also, as stated by United Nations secretary-general António Guterres during a Security Council assembly in July, generative AI possesses vast potential for both positive and negative impacts at scale, and failing to take action to mitigate the AI risks would be a grave neglect of our duty to safeguard the well-being of current and future generations (Guterres, 2023), international governance also has intergenerational influence. Hence, we examine the significance and viability of international AI governance from three aspects within this section: _manage global catastrophic AI risks_, _manage opportunities in AI_, and _historical and present efforts_, with both generational and intergenerational perspectives. We aim to contribute innovative thoughts for the prospective structure of international AI governance. **Manage Global Catastrophic AI Risks** The continual advancements in AI technology promise immense potential for global development and prosperity (Vinuesa et al., 2020). However, they inevitably harbor underlying risks. The unchecked competition in the market and geopolitical factors could precipitate the untimely development and deployment of advanced AI systems, resulting in negative global externalities (Tallberg et al., 2023). The amplification of existing inequalities such as racial and gender bias (Swaugerarchive, 2020) ingrained in AI systems may result in intergenerational ethical discrimination. Since these risks are international and intergenerational, it seems that international governance interventions could alleviate these catastrophic global AI challenges. For example, a consensus amongst nations could help defuse potential AI arms races, while an industry-wide agreement could avert the hasty and irresponsible development of sophisticated AI systems, thus securing the long-term and sustainable development of AI (Ho et al., 2023). **Manage Opportunities in AI** The opportunities created by AI development are not distributed equally, which may cause enduring digital inequality between regions and harm the sustainability of AI development. Geographic variances in AI progression suggest an inequitable distribution of its economic and societal benefits, potentially excluding developing nations or specific groups from these advantages (Ho et al., 2023; Tallberg et al., 2023). Moreover, the consolidation of decision-making authority within the technology sector among a limited number of individuals (Sara Stratton, 2021; Noble et al., 2021) could cause an intergenerational impact. Such inequality in the distribution of interests can be mitigated through international governance. Effective international consensus and coordination on the allocation of AI opportunities, which is facilitated by its propagation, education, and infrastructural development (Opp, 2023), could ensure a balanced distribution of benefits derived from AI and promote sustainability in its ongoing development. **Historical and Present Efforts** Before the surge of AI technology, the international community had laid down frameworks in line with cooperative regulation of influential technologies and critical matters. For example, the Intergovernmental Panel on Climate Change (IPCC) convened specialists to assess climactic environmental issues, fostering scientific consensus (Ho et al., 2023). The International Civil Aviation Organization (ICAO) standardized and oversaw international regulations, simultaneously assessing the member nations’ compliance with these laws (Ho et al., 2023). The International Atomic Energy Agency (IAEA) propelled the harmonious utilization of nuclear energy, with its global reach and sophisticated monitoring and evaluation mechanisms. Fast forward to the present-day scenario, wherein multiple international organizations have arrived at a consensus on AI governance. In 2019, the G20 members consolidated a ministerial declaration focusing on human-centered artificial intelligence principles (G20, 2019). Concurrently, the Organisation for Economic Cooperation and Development (OECD) set forth the _OECD Principles on Artificial Intelligence_ (OECD, 2019). The IEEE Standards Association launched a worldwide initiative aimed at _Securing that all stakeholders involved in the design and implementation_ _of autonomous and intelligent systems receive proper education, training, and motivation to emphasize ethical_ _concerns, thereby advancing these technologies for the betterment of humanity._ (Chatila and Havens, 2019). In 2021, the United Nations Educational, Scientific and Cultural Organization(UNESCO) produced the first-ever global standard on AI ethics (UNESCO, 2021), which aims to lay the foundations for making AI systems work for the good of humanity and societies, and to prevent potential harm caused by losing control over AI systems. In 2023, the AI Safety Summit was convened in London, United Kingdom. Countries held roundtable discussions 57 5.3 Open Problems on the risks and opportunities of AI and jointly issued the Bletchley Declaration (Summit, 2023). The scholarly community has also proposed prospective international governance frameworks for AI, such as the International AI Organization (IAIO) (Trager et al., 2023). We hope these precedents and research outcomes will inspire and provide the groundwork for developing a resilient and long-lasting international framework for AI governance in the future. **5.3.2** **Open-Source Governance** The debate over the open-sourcing of contemporary AI models is contentious in AI governance, particularly as these models gain increased potency (Seger et al., 2023). The potential security hazards linked with making these models open-source continue to be the crux of debates among AI researchers and policymakers. The offencedefence balance in open-source AI governance also remains controversial. There is still debate over whether open-source models will increase model security or increase the risk of abuse. As referenced in Shapiro and Siegel (2010), the efficacy of disclosure depends on the chance of potential attackers already possessing the knowledge, coupled with the government’s capacity to convert transparency into the identification and solution of emerging vulnerabilities. Some scholars have already conducted preliminary discussions on the offense-defense balance in the AI field, such as Weng (2023a)’s discussion of adversarial attacks. If a suitable equilibrium between offence and defence cannot be forged for AI systems, the open-sourcing could potentially give rise to significant risks of AI system misuse. For precision and clarity, we adhere to the definition of open-source models stated by Seger et al. (2023): enabling open and public access to the model’s architecture and weights, allowing for modification, study, further development, and utilization by anyone. Currently, the most recognized open-source models include Llama2, Falcon, Vicuna, and others. In this section, we evaluate the security advantages and potential threats posed by opensource models, fostering a discourse on the feasibility of open-sourcing these models. Ultimately, our objective is to amalgamate insights from existing studies to put forward suggestions for future open-source methodologies that will ascertain the secure implementation of these models. **Arguments for Open-sourcing** The view that supports the open-sourcing of existing models suggests that this method can mitigate the security risks inherent in these models in several ways: - **Potentially Bolster Model’s Security** . Meta’s assertions in their release blog for Llama2 (Meta, 2023) promote the belief that this enables the developer and the technical community to conduct tests on the models. As a result, this rapid identification and resolution of issues can considerably strengthen model security. In contrast, another perspective suggests that open-sourcing existing models could enhance the recognition of associated risks, thereby facilitating a greater focus on, investigation into, and mitigation of these potential hazards (Zellers, 2019). - **Foster the Decentralization of Power and Control** . Open-sourcing has been widely recognized as an effective strategy in reducing the dominance of major AI laboratories across various sectors, including economic, social, and political domains (Seger et al., 2023). An example is articulated in the core reasons for Stability’s open-sourcing of Stable Diffusion: They place their trust in individuals and the community, as opposed to having a centralized, unelected entity controlling AI technology (Mostaque, 2022). Furthermore, certain commentators draw an analogy between open-sourcing and the Enlightenment Era, asserting that decentralized control reinforces faith in the power and good of humanity and society (Howard, 2023), implementing central regulations for safety purposes might amplify the power of the AI technology community instead. **Arguments against Open-sourcing** Critics of open-source models assess the potential for misuse from the following viewpoints: - **Potentially Be Fine-Tuned into Detrimental Instances** . Current research rigorously affirms that AI systems, contradictory to their initial design intent for mitigating toxicities in chemistry or biology, now hold the potential to manufacture new chemical toxins (Urbina et al., 2022) and biological weaponry (Sandbrink, 2023). The malicious fine-tuning of such models could lead to profound security risk manifestations. Besides, language models, once fine-tuned, could emulate skilled writers and produce convincing disinformation, which may generate considerable sociopolitical risks (Goldstein et al., 2023). - **Inadvertently Encourage System Jailbreaks** . Research indicates that unfettered access to open-sourced model weights could facilitate bypassing system security measures (Seger et al., 2023). This premise was epitomized by Zou et al. (2023b), who showcased this potentiality by developing attack suffixes using Vicuna7B and 13B. Once implemented within readily accessible interfaces such as ChatGPT, Bard, and Claude, these provoked unwanted generations. Therefore, open-sourcing a model might unintentionally undermine the safeguarding protocols of models that are not open-sourced, consequently amplifying the likelihood of model misuse. 58 5.4 Rethinking AI Alignment from a Socio-technical Perspective **Tentative Conclusions on Open-Source Governance** The debate on the open-sourcing of AI models remains unsettled, with a prevailing viewpoint that the disclosure of AI models does not pose significant risks at present. Our discourse not only synthesizes existing perspectives on this topic but also prepares the ground for future deliberations considering the prudence of open-sourcing more advanced AI systems. Existing guidelines for open-sourcing advanced AI systems include measures such as evaluating risks by quantifying the potential for misuse via fine-tuning and a gradual model release (Solaiman et al., 2019; Seger et al., 2023). Meanwhile, policymakers are establishing rigorous compliance protocols for these open-source models. For example, European policymakers insist that the models should have “performance, predictability, interpretability, corrigibility, security, and cybersecurity throughout [their] lifecycle.” (Chavez, 2023). **5.4** **Rethinking AI Alignment from a Socio-technical Perspective** In the preceding discussion, our primary focus is on AI systems as the core of AI Alignment. We examine strategies to align the system with human intentions and values throughout its lifecycle, considering both forward and backward alignment. In the future, AI will address more challenging and high-stakes decisions, _e.g._, “How to allocate resource for fairness?” and “Which drugs are safe to approve?”. These decisions will require not only significant expertise for well-informed answers but also involve value judgments, leading to strong disagreements among informed individuals based on differing values. Furthermore, AI systems may transmit incorrect values, sway public opinion, facilitate cultural invasion, and exacerbate social division (Goldstein et al., 2023). Singapore Conference on AI (SCAI) once introduced 12 questions that are meant to be a holistic formulation of the challenges that should be addressed by the global AI community to allow humanity to flourish [40] . In the area of alignment we are more concerned about the following question: as AI systems evolve into socio-technical entities, how can alignment techniques mit igate the challenges they pose to human society? Specifically, we explore the incorporation of values into AI systems through alignment techniques and provide insights into security methods. We also aim to identify the alignment techniques needed to address the socio-technical challenges posed by future AI systems. **5.4.1** **Incorporating Values into AI Systems** Aligning AI systems with human morals and societal values is a key objective of alignment technology. However, current technologies ( _e.g._, RLHF) primarily blend preferences without distinguishing specific values, focusing solely on human preferences. Human preferences effectively address the basic alignment issue: ensuring models align with human intentions and safety, but not morals and societal values. However, minor errors in future AI systems’ critical problems can lead to disagreements among people with differing viewpoints. Truly understanding human values is crucial for AI systems to generalize and adapt across various scenarios and ideologies. Incorporating values into AI systems generally involves two aspects: aligning with individual values (§4.3), and aligning with collective values. In this part, we mainly discuss the second topic. The main challenge of collective value alignment lies in determining which groups to include. A prevalent approach is defining universal values like fairness, justice, and altruism, exemplified by the veil of ignorance. However, this work remains theoretical; another approach avoids defining universal values, instead seeking the broadest overlap of values across cultures. Bakker et al. (2022) initiated this approach by gathering preferences from various demographics, training a language model, and aggregating results using diverse social welfare functions. Similarly, simulated deliberative democracy has been proposed to enhance decision-making (Leike, 2022). Specifically, individuals from diverse demographics reach consensus on value-laden topics with AI assistance. This data informs new model training, enabling simulation of deliberative democracy for more apt responses to new value-laden issues. Furthermore, instead of providing a consensus answer to all users, collective value alignment should encourage AI systems to tailor responses to specific demographic groups. In other words, what values should guide the model’s responses to specific questions or in certain dialogues? Democratic Fine-Tuning (MAI, 2023) uses a value card and moral graph to link various values, allowing fine-tuned LLMs to reflect on their moral context before responding. However, while most value discussions assume static values, social values are actually dynamic and evolving. Exploring how value-aligned AI systems can dynamically adapt to changing environmental values is crucial. Future technologies need to address static value alignment first, including strategies for sampling human groups for alignment. Bakker et al. (2022) founds that consensus statements built silently from a subgroup will lead to dissent among excluded members, highlighting the consensus’s sensitivity to individual input. For international cooperation, establishing a shared data center is necessary but also requires first determining which civilizations to include and if their values can align. [40https://www.scai.gov.sg/](https://www.scai.gov.sg/) 59 **5.4.2** **Alignment Techniques for AI Governance** It’s crucial to ensure the reliability and trustworthiness of AI systems as they are adopted in various real-world decision-making scenarios. On one hand, language models still exhibit illusions during use, and on the other hand, the reliability of systems comprises two parts: the system’s reliability under individual testing environments and its reliability in human interactions. Another issue is constructing systems with decision-making processes that are observable and explainable to users. From a social perspective, the proliferation of AI systems across fields also poses potential risks. This risk arises from a gap between AI developers, who often focus on advancing technology without considering its downstream applications, and AI adopters, who may transfer AI systems to their fields without adequate safety considerations or verification of replicable success [41] . Therefore, it is crucial to build a framework that enables AI adopters to accurately assess model utility and appropriateness, and allows AI regulators to quickly identify risks and issue safety alerts in AI systems. Alignment techniques can facilitate synchronized, independent, and rigorous evaluations of AI systems. AI developers should prioritize appropriate bias handling during the training process, acknowledging the importance of socio-economic, cultural, and other differences. Furthermore, we should aim to develop robust and fair evaluation methods and datasets for auditing AI systems. Zhu et al. (2023) proposes the first dynamic testing protocol for large language models, utilizing Directed Acyclic Graphs (DAGs) to dynamically generate test data, thereby reducing the risk of test data memorization and contamination. Additionally, new robust security protocol evaluation methods have been introduced: Shlegeris and Greenblatt (2023) suggests constructing adversarial policies to manage dangerously powerful and deceptive models, while Greenblatt et al. (2023) proposes (un)trusted editing to supervise models based on their harm and deceitfulness levels. Future efforts should also prevent AI systems from reward-hacking evaluation system exploits and aim to provide AI regulators with an explainable, independent, and centralized evaluation system. AI adopters and the industry should allocate financial and computational resources to thoroughly evaluate use cases and share case studies showcasing both successes and failures. Equally important is training for adopters on downstream applications. **6** **Conclusion** In this survey, we have provided a broadly-scoped introduction to AI alignment, which aims to build AI systems that behave in line with human intentions and values. We specify the objectives of alignment as Robustness, Interpretability, Controllability, and Ethicality ( **RICE** ), and characterize the scope of alignment methods as comprising of _forward alignment_ (making AI systems aligned via alignment training) and _backward alignment_ (gaining evidence of the systems’ alignment and govern them appropriately to avoid exacerbating misalignment risks). Currently, the two notable areas of research within forward alignment are _learning from feedback_ and _learning under_ _distribution shift_, while backward alignment is comprised of _assurance_ and _governance_ . One thing that sets alignment apart from many other fields is its diversity (Hendrycks, 2022) – it is a tight assembly of multiple research directions and methods, tied together by a shared goal, as opposed to a shared methodology. This diversity brings benefits. It fosters innovation by having the different directions compete and clash against each other, leading to a cross-pollination of ideas. It also allows different research directions to complement each other and together serve the goal of alignment; this is reflected in the _alignment cycle_ (see Figure 2), where the four pillars are integrated into a self-improving loop that continually improves the alignment of AI systems. Meanwhile, this diversity of research directions raises the barrier to entry into this field, which mandates the compilation of well-organized survey materials that serve both the newcomers and the experienced. In this survey, we attempt to address this need by providing a comprehensive and up-to-date overview of alignment. We attempt to account for the full diversity within the field by adopting a broad and inclusive characterization of alignment. Our survey of alignment gives a spotlight to almost all major research agendas in this field, as well as to real-world practices on the assurance and governance front. We recognize that boundaries of alignment are often vague and subject to debate. Therefore, when proposing the RICE principles, we put forth our broad characterization of alignment as an explicit choice. In the meantime, we recognize that such a survey needs to be a long-term endeavor that is continually reviewed and updated. Both the problems and methods of alignment closely follow the development of machine learning. This fast-paced development means that new materials and frameworks can become outdated after merely a few years. This fact is one reason why we write the survey to reflect the latest developments, and also mandates continual maintenance and updates. **6.1** **Key Challenges in the Alignment Cycle** Specifically, we outline key challenges and potential future directions based on the alignment cycle, namely forward and backward alignment. [41https://www.scai.gov.sg/scai-question-11/](https://www.scai.gov.sg/scai-question-11/) 60 6.1 Key Challenges in the Alignment Cycle **Learning Human Intent from Rich Modalities (forward alignment)** Underspecificity of true human intent, _i.e._, the non-uniqueness of inferred human intent from binary feedback data, is a key challenge in scalable oversight. Consider an AI system tasked with providing proof or refutation to a mathematical hypothesis, under a human evaluator who might be tricked by sophisticated false proofs. Our goal is to construct a training process that induces the AI system to output sound proofs as opposed to false proofs that seem convincing. This system may mislead evaluators with plausible but false proofs due to the system’s optimization for human approval, as it attempts to satisfy the superficial criteria of convincing proofs rather than focusing on accuracy. The fundamental problem stems from the reliance on binary feedback which categorizes responses simply as preferred or dispreferred, thus limiting the amount of information on true human preferences that’s available to the learning algorithm, potentially leading to the preference of credible-seeming deceptive proofs over genuinely sound arguments. To enhance the model’s alignment with true human intent, researchers have proposed incorporating richer human input beyond binary choices, such as detailed text feedback (Chen et al., 2024a) and real-time interations (HadfieldMenell et al., 2016). It allows the model to differentiate between proofs that are merely convincing and those that are truly sound, using nuanced human evaluations and a vast database of human-written texts. The broader input base helps in constructing a more accurate model of human preferences, reducing the risk of favoring misleading proofs while respecting the complexity of human intent and reasoning. Looking forward, even richer modalities like embodied societal interactions could represent an enticing next step. It is worth noting that current LLMs are already trained on Internet-scale human text (and for multimodal models, also visual/audio content). Why, then, don’t reward modeling algorithms already possess the ability to accurately pin down human intent? The explanation is that pretraining data does not feed into the reward modeling process in a way that biases the process towards true human intent, even though the reward model is finetuned from the pretrained model. For instance, neural circuits representing human intent can potentially be rewired during RLHF to perform manipulative behaviors. From another perspective, pretraining on text such as _humans do not want to_ _be tricked into believing things_ does not induce the reward model to interpret later human feedback signals in this light, partly due to the lack of out-of-context learning capabilities in current LLMs (Berglund et al., 2023). Solving these problems may enable reward modeling algorithms to learn human intent from massive pretraining data, a big step towards our goal. We summarize three key questions for the learning of human intent from rich modalities. They serve as key dimensions for characterizing an alignment method from the intent modality lens, and almost all existing alignment methods can be categorized by their answers to these three questions. 1. **Learning algorithm** . As previously mentioned, we need to learn human intent from rich modalities in a way that guides the reward model’s subsequent interpretation of human input. 2. **Priors and inductive biases** . Human-like priors/inductive bias is needed for the reward modeling process to select the correct hypothesis of human intent, though this requirement is greatly loosened as the allowed modalities of human input expand. 3. **Learner alignment** . We utilize the intent learner to align AI systems, possibly by using it as a reward model. However, this would not be possible if the intent learner, which is itself an AI system with potentially strong capabilities, is misaligned. This necessitates measures to avoid or contain the misalignment of the intent learner. **Trustworthy Tools for Assurance (backward alignment)** A major concern in AI alignment is deceptive alignment, where AI systems pursue aligned goals under most circumstances but may pursue other goals when opportunities arise. Recent studies have revealed that general alignment techniques ( _e.g._, SFT, RLHF, Adversarial Training) fail to eradicate certain deceptive and backdoor behaviors, possibly leading to a misleading sense of safety (Hubinger et al., 2024). With AI systems gaining power and access to more resources, hidden intentions that pose existential risks could have unimaginable consequences. _How can we detect and eliminate deceptive and_ _backdoor behaviors?_ Reliable tools are still lacking to address this issue. On one hand, mechanistic interpretability tools encounter additional challenges due to the polysemanticity of neurons and scalability issues. On the other hand, there is a limited understanding of how jailbreaking functions and the susceptibility of language models to poisoning and backdoors (Anwar et al., 2024). Additionally, given the potential misuse of AI systems in cyber attacks, biological warfare, and misinformation, it is crucial to develop reliable mechanisms to trace the origins of LLM outputs. While AI systems are becoming more integrated into society, societal readiness lags behind. This is evident from the inadequate AI governance efforts, insufficient public knowledge, governments’ lack of necessary scientific and technical capabilities, the absence of institutions that can keep pace with LLM advancements, and the challenges in mitigating the social 61 6.2 Key Traits and Future Directions in Alignment Research impacts of widespread harmful behaviors. Therefore, it is essential to reconsider AI alignment from a sociotechnical standpoint, establish dependable AI assurance and governance mechanisms, and engage in effective international governance collaboration. **Value Elicitation and Value Implementation (backward alignment)** Current algorithms for learning from human feedback, particularly RLHF, often assume feedback comes from a singular, monolithic human source. However, this assumption is unrealistic due to widespread disagreements on contentious issues globally, which frequently result in conflicting judgments about AI system outputs (Santurkar et al., 2023). Consequently, determining who to draw feedback from and understanding the scope and nature of human values infused into models are crucial questions for the field of alignment. _Value Elicitation_ and _Value Implementation_ aim to define the values and norms that AI systems should encode and how to integrate these into AI systems. Human values and preferences are diverse, ranging from strict rules like laws and moral principles to social etiquette and specific domain preferences (Cahyawijaya et al., 2024; Kirk et al., 2024). We need reliable tools to reveal the values embedded in current AI systems and potential social risks, enabling us to mitigate these risks more effectively [42] . _Democratic human input_ is one of the leading solutions to value elicitation and implementation. This method gathers input from a large, demographically representative sample of individuals, aggregating preferences and values into a coherent policy, rather than relying on feedback from a single individual. This approach is heavily influenced by the computational social choice literature (Brandt et al., 2016). Leading industry (Zaremb et al., 2023) and academic (Köpf et al., 2024) labs have adopted democratic human input for LLMs. However, research is still needed on its integration into more agentic AI systems, such as LLM-based autonomous agents. Despite its apparent simplicity, democratic human input encounters significant practical and fundamental challenges. Obtaining a truly random sample of the global population is particularly challenging, as 33% of people worldwide do not have Internet access and thus are excluded from participating in AI system training (United Nations, ITU, 2023). Furthermore, human feedback becomes less effective when the AI system’s reasoning capabilities surpass those of humans, making it difficult for human workers to evaluate its outputs. To complement democratic human input, alternative approaches aim to formalize universally recognized meta-level moral principles, such as _moral consistency_, _moral reflection_, and _moral progress_, designing algorithms to enact these principles. Although these methods still rely on human data and input, they do not demand strict representativeness and are less constrained by human oversight limitations. - **Moral consistency** . There is a general consensus that moral principles should be consistently applied, meaning similar cases should receive similar treatment irrespective of the people or parties involved. Algorithms have been developed to integrate this principle into the ethical decision-making processes of AI systems (Jin et al., 2022b). - **Moral reflection and moral progress** . The _coherent extrapolated volition_ concept was developed to formalize the role of reflection in shaping human moral values (Søvik, 2022). Inspired by this, subsequent algorithms were designed to enable AI systems to mimic human moral reflection, thereby influencing their actions (Xie et al., 2023). Furthermore, the logical next step of moral reflection is _moral progress_, demonstrated by AI-driven analyses of historical moral trends (Schramowski et al., 2020; Atif et al., 2022) and efforts to permanently integrate continual moral advancement into AI systems (Kenward and Sinclair, 2021). **6.2** **Key Traits and Future Directions in Alignment Research** In the end of the survey, we conclude the survey by looking ahead and presenting the key traits in this field that we believe ought to be preserved or fostered. **Open-Ended Exploration of Novel Challenges and Approaches** A lot of the alignment discourse is built upon classic works that predate the recent developments of LLMs and other breakthroughs in large-scale deep learning. Thus, when this paradigm shift happens in the machine learning field, it is plausible that some challenges in alignment become less salient while others become more so; after all, one defining feature of scientific theories is their falsifiability (Popper, 2005). More importantly, this shift in machine learning methodology and the broader trend of ever-tighter integration of AI systems into society (Abbass, 2019) introduces novel challenges that could not be envisioned before. This requires that we engage in _open-ended exploration_, actively seeking out new challenges that were previously neglected. Moreover, such an exploration need not be constrained to challenges – a similar mindset should be adopted regarding approaches and solutions, thus building a more diverse portfolio for both the _questions_ and the _answers_ (Shimi, 2022). [42https://www.scai.gov.sg](https://www.scai.gov.sg) 62 6.2 Key Traits and Future Directions in Alignment Research **Combining Forward-Looking and Present-Oriented Perspectives** Alignment has emphasized harms from potential advanced AI systems that possess stronger capabilities than current systems (Ngo, 2020a). These systems might come into existence well into the future, or might just be a few years away (Stein-Perlman et al., 2022). The former possibility requires us to look into extrapolated trends and hypothetical scenarios (Carlsmith, 2022). In contrast, the latter possibility highlights the need for on-the-ground efforts that work with current governance institutions and use current systems as a prototype for more advanced ones (Cotra, 2021). **Emphasis on Policy Relevance** Alignment research does not live in a vacuum but in an ecosystem [43], with participation from researchers, industry actors, governments, and non-governmental organizations. Research serving the needs of the AI alignment and safety ecosystem would therefore be useful. Such needs include solving the key barriers to various governance schemes, for example, extreme risk evaluations (Shevlane et al., 2023), infrastructure for computing governance, and mechanisms for making verifiable claims about AI systems (Brundage et al., 2020). **Emphasis on Social Complexities and Moral Values** As AI systems become increasingly integrated into society (Abbass, 2019), alignment ceases to be only a single-agent problem and becomes a social problem. Here, the meaning of _social_ is three-fold. 1. Alignment research in multi-agent settings featuring the interactions between multiple AI systems and multiple humans (Critch and Krueger, 2020; Liu et al., 2024a). This includes how AI systems can receive granular feedback from realistic simulated societies, ensuring consistency in training scenarios and among multiple entities ( _i.e._, multi-agent, multiple AI systems, and multiple humans), which not only aids in generalizing AI systems in multi-entity settings but also helps avoid problems associated with RL (Liu et al., 2024a). 2. Incorporating human moral and social values into alignment (see §1.2.3 and §4.3), which is closely linked to the field of _machine ethics_ and _value alignment_ (Gabriel, 2020; Gabriel and Ghazavi, 2021). 3. Modeling and predicting the impacts of AI systems on society, which requires methods to approach the complexities of the social system, including those from the social sciences. Examples of potentially useful methodologies include social simulation (Bonabeau, 2002; De Marchi and Page, 2014; Park et al., 2023a) and game theory (Du et al., 2023). **Acknowledgments** We thank David Krueger, Anca Dragan, Alan Chan, Stephen Casper, Haoxing Du, Lawrence Chan, Johannes Treutlein, and YingShan Lei for their helpful and constructive feedback on the manuscript. We thank Yi Qu for the graphical design and refinement of the figures in our survey. 43See aisafety.world for a map of the organizational landscape of alignment. 63 References **References** [1] Hussein A Abbass. 2019. Social integration of artificial intelligence: functions, automation allocation logic and human-autonomy trust. _Cognitive Computation_, 11(2):159–171. [2] Pieter Abbeel and Andrew Y Ng. 2004. Apprenticeship learning via inverse reinforcement learning. In _Pro-_ _ceedings of the twenty-first international conference on Machine learning_, page 1. [3] Marwa Abdulhai, Clément Crepy, Daria Valter, John Canny, and Natasha Jaques. 2022. Moral foundations of large language models. In _AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric_ _AI_ . [4] David Abel, James MacGlashan, and Michael L Littman. 2016. Reinforcement learning as a framework for ethical decision making. In _AAAI Workshop: AI, Ethics, and Society_, volume 16, page 02. Phoenix, AZ. [5] Daron Acemoglu and Pascual Restrepo. 2018. Artificial intelligence, automation, and work. In _The economics_ _of artificial intelligence: An agenda_, pages 197–236. University of Chicago Press. [6] Stephen Adams, Tyler Cody, and Peter A Beling. 2022. A survey of inverse reinforcement learning. _Artificial_ _Intelligence Review_, 55(6):4307–4346. [7] Gediminas Adomavicius, Jesse Bockstedt, Shawn Curley, and Jingjing Zhang. 2022. Recommender systems, ground truth, and preference pollution. _AI Magazine_, 43(2):177–189. [8] M Mehdi Afsar, Trafford Crump, and Behrouz Far. 2022. Reinforcement learning based recommender systems: A survey. _ACM Computing Surveys (CSUR)_, 55(7):1–38. [9] Forest Agostinelli, Guillaume Hocquet, Sameer Singh, and Pierre Baldi. 2018. From reinforcement learning to deep reinforcement learning: An overview. In _Braverman Readings in Machine Learning. Key Ideas_ _from Inception to Current State: International Conference Commemorating the 40th Anniversary of Emmanuil_ _Braverman’s Decease, Boston, MA, USA, April 28-30, 2017, Invited Talks_, pages 298–328. Springer. [[10] AI Safety Summit. 2023. Ai safety summit 2023: Roundtable chairs’ summaries, 1 november. https:](https://www.gov.uk/government/publications/ai-safety-summit-1-november-roundtable-chairs-summaries/ai-safety-summit-2023-roundtable-chairs-summaries-1-november--2) [//www.gov.uk/government/publications/ai-safety-summit-1-november-roundta](https://www.gov.uk/government/publications/ai-safety-summit-1-november-roundtable-chairs-summaries/ai-safety-summit-2023-roundtable-chairs-summaries-1-november--2) [ble-chairs-summaries/ai-safety-summit-2023-roundtable-chairs-summaries-1](https://www.gov.uk/government/publications/ai-safety-summit-1-november-roundtable-chairs-summaries/ai-safety-summit-2023-roundtable-chairs-summaries-1-november--2) [-november--2.](https://www.gov.uk/government/publications/ai-safety-summit-1-november-roundtable-chairs-summaries/ai-safety-summit-2023-roundtable-chairs-summaries-1-november--2) [[11] Ajeya Cotra. 2021. why-ai-alignment-could-be-hard-with-modern-deep-learning. https://www.cold-t](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning) [akes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning.](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning) [12] Riad Akrour, Marc Schoenauer, and Michele Sebag. 2011. Preference-based policy learning. In _Machine_ _Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2011, Athens, Greece,_ _September 5-9, 2011. Proceedings, Part I 11_, pages 12–27. Springer. [13] Riad Akrour, Marc Schoenauer, and Michèle Sebag. 2012. April: Active preference learning-based reinforcement learning. In _Machine Learning and Knowledge Discovery in Databases: European Conference, ECML_ _PKDD 2012, Bristol, UK, September 24-28, 2012. Proceedings, Part II 23_, pages 116–131. Springer. [14] Jide Alaga and Jonas Schuett. 2023. Coordinated pausing: An evaluation-based coordination scheme for frontier ai developers. _arXiv preprint arXiv:2310.00374_ . [15] Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. [https://openreview.net/forum?id=ryF7rTqgl.](https://openreview.net/forum?id=ryF7rTqgl) [16] Stefano V Albrecht and Subramanian Ramamoorthy. 2013. A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems. In _Proceedings of the 2013 international conference_ _on Autonomous agents and multi-agent systems_, pages 1155–1156. [17] Gordon Willard Allport. 1955. _Becoming: Basic considerations for a psychology of personality_, volume 20. Yale University Press. [18] David Alvarez Melis and Tommi Jaakkola. 2018. Towards robust interpretability with self-explaining neural networks. _Advances in Neural Information Processing Systems_, 31. [19] Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. _AI Magazine_, 35(4):105–120. [[20] Dario Amodei, Paul Christiano, and Alex Ray. 2017. Learning from human preferences. https://open](https://openai.com/research/learning-from-human-preferences) [ai.com/research/learning-from-human-preferences.](https://openai.com/research/learning-from-human-preferences) 64 References [21] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. Concrete problems in ai safety. _arXiv preprint arXiv:1606.06565_ . [22] Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for deep neural networks. In _6th International Conference on Learning_ _Representations (ICLR)_, 1711.06104, pages 0–0. Arxiv-Computer Science. [23] Markus Anderljung, Joslyn Barnhart, Jade Leung, Anton Korinek, Cullen O’Keefe, Jess Whittlestone, Shahar Avin, Miles Brundage, Justin Bullock, Duncan Cass-Beggs, et al. 2023. Frontier ai regulation: Managing emerging risks to public safety. _arXiv preprint arXiv:2307.03718_ . [24] Michael Anderson, Susan Anderson, and Chris Armen. 2005. Towards machine ethics: Implementing two action-based ethical theories. In _Proceedings of the AAAI 2005 fall symposium on machine ethics_, pages 1–7. [25] Michael Anderson and Susan Leigh Anderson. 2007. The status of machine ethics: a report from the aaai symposium. _Minds and Machines_, 17:1–10. [26] Michael Anderson and Susan Leigh Anderson. 2011. _Machine ethics_ . Cambridge University Press. [27] Jacob Andreas. 2022. Language models as agent models. In _Findings of the Association for Computational_ _Linguistics: EMNLP 2022_, pages 5769–5779. [[28] Anthropic. 2022. Softmax linear units. https://transformer-circuits.pub/2022/solu/in](https://transformer-circuits.pub/2022/solu/index.html) [dex.html.](https://transformer-circuits.pub/2022/solu/index.html) [[29] Anthropic. 2023a. Anthropic’s responsible scaling policy. https://www.anthropic.com/index/an](https://www.anthropic.com/index/anthropics-responsible-scaling-policy) [thropics-responsible-scaling-policy.](https://www.anthropic.com/index/anthropics-responsible-scaling-policy) [[30] Anthropic. 2023b. Circuits updates - july 2023. https://transformer-circuits.pub/2023/j](https://transformer-circuits.pub/2023/july-update/index.html) [uly-update/index.html.](https://transformer-circuits.pub/2023/july-update/index.html) [[31] Anthropic. 2023c. Model card and evaluations for claude models. https://www-files.anthropic](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf) [.com/production/images/Model-Card-Claude-2.pdf.](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf) [[32] anthropic. 2024. Sycophancy to subterfuge: Investigating reward tampering in language models. https:](https://www.anthropic.com/research/reward-tampering) [//www.anthropic.com/research/reward-tampering.](https://www.anthropic.com/research/reward-tampering) [33] Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, et al. 2024. Foundational challenges in assuring alignment and safety of large language models. _arXiv preprint arXiv:2404.09932_ . [[34] ARC Evals. 2023. Update on ARC’s recent eval efforts. https://evals.alignment.org/blog/2](https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/) [023-03-18-update-on-recent-evals/.](https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/) [35] Sercan Ö Arik and Tomas Pfister. 2021. Tabnet: Attentive interpretable tabular learning. In _Proceedings of_ _the AAAI conference on artificial intelligence_, volume 35(8), pages 6679–6687. [36] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2019. Invariant risk minimization. _arXiv preprint arXiv:1907.02893_ . [37] Konstantine Arkoudas, Selmer Bringsjord, and Paul Bello. 2005. Toward ethical robots via mechanized deontic logic. In _AAAI fall symposium on machine ethics_, pages 17–23. The AAAI Press Menlo Park, CA, USA. [[38] Stuart Armstrong. 2019. problems with ai debate. https://www.alignmentforum.org/posts/f](https://www.alignmentforum.org/posts/fNTCveSa4HvqvZR2F/problems-with-ai-debate) [NTCveSa4HvqvZR2F/problems-with-ai-debate.](https://www.alignmentforum.org/posts/fNTCveSa4HvqvZR2F/problems-with-ai-debate) [39] Stuart Armstrong, Nick Bostrom, and Carl Shulman. 2016. Racing to the precipice: a model of artificial intelligence development. _AI & society_, 31:201–206. [40] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. Linear algebraic structure of word senses, with applications to polysemy. _Transactions of the Association for Computational Linguistics_, 6:483–495. [41] Saurabh Arora and Prashant Doshi. 2021. A survey of inverse reinforcement learning: Challenges, methods and progress. _Artificial Intelligence_, 297:103500. [42] Kenneth J Arrow. 2012. _Social choice and individual values_, volume 12. Yale university press. [[43] Asimov. 1942. Asimov’s laws. https://webhome.auburn.edu/~vestmon/robotics.html.](https://webhome.auburn.edu/~vestmon/robotics.html) 65 References [44] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. _arXiv preprint arXiv:2112.00861_ . [45] Karl Johan Åström and Richard M Murray. 2021. _Feedback systems: an introduction for scientists and engi-_ _neers_ . Princeton university press. [46] Karl Johan Åström and Björn Wittenmark. 2008. _Adaptive control_ . Courier Corporation. [47] Muhammad Atif, Muhammad Shafiq, Muhammad Farooq, Gohar Ayub, Mujeeb Hussain, and Muhammad Waqas. 2022. Evolution of basic human values orientations: An application of monitoring changes in cluster solutions. _PloS one_, 17(9):e0274600. [[48] Atomium-EISMD. 2023. Ai4people. https://www.eismd.eu/ai4people.](https://www.eismd.eu/ai4people) [49] Alexandre Attia and Sharone Dayan. 2018. Global overview of imitation learning. _arXiv preprint_ _arXiv:1801.06503_ . [50] Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. 2018. The moral machine experiment. _Nature_, 563(7729):59–64. [51] Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. 2023. A general theoretical paradigm to understand learning from human preferences. _arXiv_ _preprint arXiv:2310.12036_ . [52] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_ . [53] Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. 2021. Recent advances in adversarial training for adversarial robustness. In _Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence,_ _IJCAI-21_, pages 4312–4321. International Joint Conferences on Artificial Intelligence Organization. Survey Track. [54] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_ . [55] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. _arXiv preprint arXiv:2212.08073_ . [56] Michael Bain and Claude Sammut. 1995. A framework for behavioural cloning. In _Machine Intelligence 15_, pages 103–129. [57] Andrea Bajcsy, Dylan P Losey, Marcia K O’Malley, and Anca D Dragan. 2018. Learning from physical human corrections, one feature at a time. In _Proceedings of the 2018 ACM/IEEE International Conference on_ _Human-Robot Interaction_, pages 141–149. [58] Bowen Baker, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. 2022. Video pretraining (vpt): Learning to act by watching unlabeled online videos. _Advances in Neural Information Processing Systems_, 35:24639–24654. [59] Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, et al. 2022. Fine-tuning language models to find agreement among humans with diverse preferences. _Advances in Neural Information Processing Systems_, 35:38176–38189. [60] Paul Bakker, Yasuo Kuniyoshi, et al. 1996. Robot see, robot do: An overview of robot imitation. In _AISB96_ _Workshop on Learning in Robots and Animals_, volume 5. [61] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. 2023. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. In _Proceedings of the 13th_ _International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific_ _Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 675–718, Nusa Dua, Bali. Association for Computational Linguistics. [62] Yamini Bansal, Preetum Nakkiran, and Boaz Barak. 2021. Revisiting model stitching to compare neural representations. _Advances in Neural Information Processing Systems_, 34:225–236. 66 References [[63] Beth Barnes. 2020. debate-update-obfuscated-arguments-problem. https://www.alignmentforum.o](https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem) [rg/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem.](https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem) [64] Feras A Batarseh, Laura Freeman, and Chih-Hao Huang. 2021. A survey on artificial intelligence assurance. _Journal of Big Data_, 8(1):60. [65] Sara Beery, Grant Van Horn, and Pietro Perona. 2018. Recognition in terra incognita. In _Proceedings of the_ _European conference on computer vision (ECCV)_, pages 456–473. [66] Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, and Ramtin Pedarsani. 2022. Imitation learning by estimating expertise of demonstrators. In _International Conference on Machine Learning_, pages 1732–1748. PMLR. [67] Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. _Computational Linguis-_ _tics_, 48(1):207–219. [68] Rachel KE Bellamy, Kuntal Dey, Michael Hind, Samuel C Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, et al. 2018. Ai fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. _arXiv preprint_ _arXiv:1810.01943_ . [69] Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella Biderman, and Jacob Steinhardt. 2023. Eliciting latent predictions from transformers with the tuned lens. _arXiv preprint_ _arXiv:2303.08112_ . [70] Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. 2009. _Robust optimization_, volume 28. Princeton university press. [71] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In _Proceedings of the 2021 ACM conference on fairness,_ _accountability, and transparency_, pages 610–623. [[72] Yoshua Bengio. 2023. How rogue ais may arise. https://yoshuabengio.org/2023/05/22/ho](https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise) [w-rogue-ais-may-arise.](https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise) [73] Yoshua Bengio, Stuart Russell, Elon Musk, and Future of Life Institute. 2023. Pause giant ai experiments: An [open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments.](https://futureoflife.org/open-letter/pause-giant-ai-experiments) [74] Tsvi Benson-Tilsen and Nate Soares. 2016. Formalizing convergent instrumental goals. In _AAAI Workshop:_ _AI, Ethics, and Society_ . [75] Gregory Benton, Wesley Maddox, Sanae Lotfi, and Andrew Gordon Gordon Wilson. 2021. Loss surface simplexes for mode connecting volumes and fast ensembling. In _International Conference on Machine Learning_, pages 769–779. PMLR. [76] Hugo Berg, Siobhan Hall, Yash Bhalgat, Hannah Kirk, Aleksandar Shtedritski, and Max Bain. 2022. A prompt array keeps the bias away: Debiasing vision-language models with adversarial learning. In _Proceedings_ _of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th_ _International Joint Conference on Natural Language Processing_, pages 806–822. [77] Stuart Berg, Dominik Kutra, Thorben Kroeger, Christoph N Straehle, Bernhard X Kausler, Carsten Haubold, Martin Schiegg, Janez Ales, Thorsten Beier, Markus Rudy, et al. 2019. Ilastik: interactive machine learning for (bio) image analysis. _Nature methods_, 16(12):1226–1232. [78] Lukas Berglund, Asa Cooper Stickland, Mikita Balesni, Max Kaufmann, Meg Tong, Tomasz Korbak, Daniel Kokotajlo, and Owain Evans. 2023. Taken out of context: On measuring situational awareness in llms. _arXiv_ _preprint arXiv:2309.00667_ . [79] Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2017. A convex framework for fair regression. _arXiv preprint arXiv:1706.02409_ . [80] Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. 2021. Fairness in criminal justice risk assessments: The state of the art. _Sociological Methods & Research_, 50(1):3–44. [81] Fiona Berreby, Gauvain Bourgne, and Jean-Gabriel Ganascia. 2017. A declarative modular framework for representing and applying ethical principles. In _16th Conference on Autonomous Agents and MultiAgent Sys-_ _tems_ . [82] Omar Besbes, Will Ma, and Omar Mouchtaki. 2022. Beyond IID: data-driven decision-making in heterogeneous environments. _Advances in Neural Information Processing Systems_, 35:23979–23991. 67 References [[83] Paul Christiano Beth Barnes. 2020. writeup-progress-on-ai-safety-via-debate-1. https://www.alignm](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) [entforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-d](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) [ebate-1.](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) [84] Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, and DA Forsyth. 2019. Unrestricted adversarial examples via semantic manipulation. In _International Conference on Learning Representations_ . [85] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. 2023. Accurate medium-range global weather forecasting with 3d neural networks. _Nature_, 619(7970):533–538. [86] Zhu Ming Bi, Chaomin Luo, Zhonghua Miao, Bing Zhang, WJ Zhang, and Lihui Wang. 2021. Safety assurance mechanisms of collaborative robotic systems in manufacturing. _Robotics and Computer-Integrated_ _Manufacturing_, 67:102022. [[87] Richard Blumenthal and Josh Hawley. 2023. Bipartisan framework for u.s. ai act. https://www.blumen](https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf) [thal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf.](https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf) [88] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. _Advances in Neural Informa-_ _tion Processing Systems_, 29. [89] Eric Bonabeau. 2002. Agent-based modeling: Methods and techniques for simulating human systems. _Pro-_ _ceedings of the national academy of sciences_, 99(suppl_3):7280–7287. [90] Nick Bostrom. 2012. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. _Minds and Machines_, 22:71–85. [91] Nick Bostrom. 2013. Existential risk prevention as global priority. _Global Policy_, 4(1):15–31. [92] Nick Bostrom and Milan M Cirkovic. 2011. _Global catastrophic risks_ . Oxford University Press, USA. [93] Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. Machine unlearning. In _2021 IEEE Symposium on Security and_ _Privacy (SP)_, pages 141–159. IEEE. [94] Samuel R Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamil˙e Lukoši¯ut˙e, Amanda Askell, Andy Jones, Anna Chen, et al. 2022. Measuring progress on scalable oversight for large language models. _arXiv preprint arXiv:2211.03540_ . [95] Hamed Bozorgi and Trung Dung Ngo. 2023. Beyond shared autonomy: Joint perception and action for humanin-the-loop mobile robot navigation systems. _Journal of Intelligent & Robotic Systems_, 109(1):20. [96] Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_, 39(3/4):324–345. [97] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D Procaccia. 2016. _Handbook of com-_ _putational social choice_ . Cambridge University Press. [98] Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, et al. 2023. Towards monosemanticity: Decomposing language models with dictionary learning. _Transformer Circuits Thread_, page 2. [99] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. 2016. Openai gym. _arXiv preprint arXiv:1606.01540_ . [100] Daniel Brown, Russell Coleman, Ravi Srinivasan, and Scott Niekum. 2020a. Safe imitation learning via fast bayesian reward inference from preferences. In _International Conference on Machine Learning_, pages 1165–1177. PMLR. [101] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. 2019. Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. In _International Conference on_ _Machine Learning (ICML)_, pages 783–792. [102] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020b. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877–1901. [103] Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, et al. 2020. Toward trustworthy ai development: mechanisms for supporting verifiable claims. _arXiv preprint arXiv:2004.07213_ . 68 References [104] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. _arXiv preprint arXiv:2303.12712_ . [105] Alexander Bukharin, Yixiao Li, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2023. Deep reinforcement learning from hierarchical weak preference feedback. _arXiv preprint arXiv:2309.02632_ . [106] Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In _Conference on fairness, accountability and transparency_, pages 77–91. PMLR. [107] Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, et al. 2023. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. _arXiv preprint arXiv:2312.09390_ . [108] Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. 2022. Discovering latent knowledge in language models without supervision. In _The Eleventh International Conference on Learning Representations_ . [109] Lucian Bu¸soniu, Tim De Bruin, Domagoj Toli´c, Jens Kober, and Ivana Palunko. 2018. Reinforcement learning for control: Performance, stability, and deep approximators. _Annual Reviews in Control_, 46:8–28. [110] Serkan Cabi, Sergio Gómez Colmenarejo, Alexander Novikov, Ksenia Konyushkova, Scott E. Reed, Rae Jeong, Konrad Zolna, Yusuf Aytar, David Budden, Mel Vecerík, Oleg Sushkov, David Barker, Jonathan Scholz, Misha Denil, Nando de Freitas, and Ziyu Wang. 2020. Scaling data-driven robotics with reward sketching and batch reinforcement learning. In _Robotics: Science and Systems XVI, Virtual Event / Corvalis, Oregon, USA,_ _July 12-16, 2020_ . [111] Ángel Alexander Cabrera, Erica Fu, Donald Bertucci, Kenneth Holstein, Ameet Talwalkar, Jason I Hong, and Adam Perer. 2023. Zeno: An interactive framework for behavioral evaluation of machine learning. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems_, pages 1–14. [112] Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, and Pascale Fung. 2024. High-dimension human value representation in large language models. _arXiv preprint_ _arXiv:2404.07900_ . [[113] CAIS. 2023. Center for ai safety: Statement on ai risk. https://www.safe.ai/statement-on-a](https://www.safe.ai/statement-on-ai-risk) [i-risk.](https://www.safe.ai/statement-on-ai-risk) [114] Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. _Science_, 356(6334):183–186. [115] Rafael A Calvo, Dorian Peters, and Stephen Cave. 2020. Advancing impact assessment for intelligent systems. _Nature Machine Intelligence_, 2(2):89–91. [[116] Ella Cao and Eduardo Baptista. 2023. ’deepfake’ scam in china fans worries over ai-driven fraud.](https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/) _Reuters_ . [117] Nicholas Carlini, Milad Nasr, Christopher A Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei W Koh, Daphne Ippolito, Florian Tramer, and Ludwig Schmidt. 2024. Are aligned neural networks adversarially aligned? _Advances in Neural Information Processing Systems_, 36. [118] Joseph Carlsmith. 2022. Is power-seeking ai an existential risk? _arXiv preprint arXiv:2206.13353_ . [119] Tom Carlson and Yiannis Demiris. 2010. Increasing robotic wheelchair safety with collaborative control: Evidence from secondary task experiments. In _2010 IEEE International Conference on Robotics and Automation_, pages 5582–5587. IEEE. [120] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. 2019. Unlabeled data improves adversarial robustness. _Advances in Neural Information Processing Systems_, 32. [[121] Andrew Carr. 2023. Teaching large language models to zip their lips. https://gretel.ai/blog/t](https://gretel.ai/blog/teaching-large-language-models-to-zip-their-lips) [eaching-large-language-models-to-zip-their-lips.](https://gretel.ai/blog/teaching-large-language-models-to-zip-their-lips) [[122] Andres Carranza, Dhruv Pai, Rylan Schaeffer, Arnuv Tandon, and Sanmi Koyejo. 2023. Deceptive alignment](http://arxiv.org/abs/2307.10569) [monitoring.](http://arxiv.org/abs/2307.10569) [123] Micah Carroll, Alan Chan, Henry Ashton, and David Krueger. 2023. Characterizing manipulation from ai systems. In _Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and_ _Optimization_ . 69 References [124] Micah D Carroll, Anca Dragan, Stuart Russell, and Dylan Hadfield-Menell. 2022. Estimating and penalizing induced preference shifts in recommender systems. In _International Conference on Machine Learning_, pages 2686–2708. PMLR. [125] Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah. 2019. Activation atlas. _Distill_, 4(3):e15. [126] Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In _Proceedings of the 21th_ _ACM SIGKDD international conference on knowledge discovery and data mining_, pages 1721–1730. [127] Diogo V Carvalho, Eduardo M Pereira, and Jaime S Cardoso. 2019. Machine learning interpretability: A survey on methods and metrics. _Electronics_, 8(8):832. [[128] Stephen Casper. 2023. Moving Forward: 11th post of The Engineer’s Interpretability Sequence. https:](https://www.alignmentforum.org/posts/L5Rua9aTndviy8dvc/eis-xi-moving-forward) [//www.alignmentforum.org/posts/L5Rua9aTndviy8dvc/eis-xi-moving-forward.](https://www.alignmentforum.org/posts/L5Rua9aTndviy8dvc/eis-xi-moving-forward) [129] Stephen Casper, Tong Bu, Yuxiao Li, Jiawei Li, Kevin Zhang, Kaivalya Hariharan, and Dylan HadfieldMenell. 2023a. Red teaming deep neural networks with feature synthesis tools. In _Thirty-seventh Conference_ _on Neural Information Processing Systems_ . [130] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphael Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell. 2023b. Open problems and fundamental limitations of reinforcement learning from human feedback. _Transac-_ _tions on Machine Learning Research_ . Survey Certification. [131] Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, and Dylan Hadfield-Menell. 2023c. Explore, establish, exploit: Red teaming language models from scratch. _arXiv preprint arXiv:2306.09442_ . [132] Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, and Gabriel Kreiman. 2022. Robust feature-level adversaries are interpretability tools. _Advances in Neural Information Processing Systems_, 35:33093–33106. [133] Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. _arXiv_ _preprint arXiv:2006.14799_ . [134] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2021. A survey on adversarial attacks and defences. _CAAI Transactions on Intelligence Technology_, 6(1):25– 45. [135] Souradip Chakraborty, Amrit Bedi, Alec Koppel, Huazheng Wang, Dinesh Manocha, Mengdi Wang, and [Furong Huang. 2024. PARL: A unified framework for policy alignment in reinforcement learning. In](https://openreview.net/forum?id=ByR3NdDSZB) _The_ _Twelfth International Conference on Learning Representations_ . [136] Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco, Zhonghao He, Yawen Duan, Micah Carroll, et al. 2023. Harms from increasingly agentic algorithmic systems. In _Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency_, pages 651–666. [137] Raja Chatila and John C Havens. 2019. The ieee global initiative on ethics of autonomous and intelligent systems. _Robotics and well-being_, pages 11–16. [[138] Pablo Chavez. 2023. An ai challenge: Balancing open and closed systems. https://cepa.org/artic](https://cepa.org/article/an-ai-challenge-balancing-open-and-closed-systems) [le/an-ai-challenge-balancing-open-and-closed-systems.](https://cepa.org/article/an-ai-challenge-balancing-open-and-closed-systems) [139] Hila Chefer, Shir Gur, and Lior Wolf. 2021. Transformer interpretability beyond attention visualization. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 782–791. [140] Angelica Chen, Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Samuel R. Bowman, [Kyunghyun Cho, and Ethan Perez. 2024a. Learning from natural language feedback.](https://openreview.net/forum?id=xo3hI5MwvU) _Transactions on Machine_ _Learning Research_ . [[141] Canyu Chen and Kai Shu. 2024. Can LLM-generated misinformation be detected? In](https://openreview.net/forum?id=ccxD4mtkTU) _The Twelfth Interna-_ _tional Conference on Learning Representations_ . [142] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. _arXiv preprint arXiv:2107.03374_ . 70 References [143] Zhaoyu Chen, Bo Li, Shuang Wu, Kaixun Jiang, Shouhong Ding, and Wenqiang Zhang. 2024b. Contentbased unrestricted adversarial attack. _Advances in Neural Information Processing Systems_, 36. [144] Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, and Cho-Jui Hsieh. 2020. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. _Proceedings of the AAAI Conference on_ _Artificial Intelligence_, 34(04):3601–3608. [145] Weiwei Cheng, Eyke Hüllermeier, and Krzysztof J Dembczynski. 2010a. Graded multilabel classification: The ordinal case. In _Proceedings of the 27th international conference on machine learning (ICML-10)_, pages 223–230. [146] Weiwei Cheng, Eyke Hüllermeier, and Krzysztof J Dembczynski. 2010b. Label ranking methods based on the plackett-luce model. In _Proceedings of the 27th International Conference on Machine Learning (ICML-10)_, pages 215–222. [147] Weiwei Cheng, Michaël Rademaker, Bernard De Baets, and Eyke Hüllermeier. 2010c. Predicting partial orders: ranking with abstention. In _Machine Learning and Knowledge Discovery in Databases: European_ _Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24, 2010, Proceedings, Part I 21_, pages 215– 230. Springer. [148] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. _See https://vicuna. lmsys. org (accessed 14 April 2023)_ . [149] Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. 2019. On the weaknesses of reinforcement learning for neural machine translation. _arXiv preprint arXiv:1907.01752_ . [150] Brian Christian. 2020. _The alignment problem: Machine learning and human values_ . WW Norton & Company. [[151] Paul Christiano. 2019. What failure looks like. https://www.alignmentforum.org/posts/HBx](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) [e6wdjxK239zajf/what-failure-looks-like.](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) [[152] Paul Christiano. 2022. Approval-directed agents. https://www.alignmentforum.org/posts/7](https://www.alignmentforum.org/posts/7Hr8t6xwuuxBTqADK/approval-directed-agents-1) [Hr8t6xwuuxBTqADK/approval-directed-agents-1.](https://www.alignmentforum.org/posts/7Hr8t6xwuuxBTqADK/approval-directed-agents-1) [[153] Paul Christiano. 2023. Thoughts on the impact of rlhf research. https://www.lesswrong.com/po](https://www.lesswrong.com/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research) [sts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research.](https://www.lesswrong.com/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research) [154] Paul Christiano, Buck Shlegeris, and Dario Amodei. 2018. Supervising strong learners by amplifying weak experts. _arXiv preprint arXiv:1810.08575_ . [155] Paul Christiano, Mark Xu, and Ajeya Cotra. 2021. Arc’s first technical report: Eliciting latent knowledge. [https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-techn](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) [ical-report-eliciting-latent-knowledge.](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) [156] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. _Advances in Neural Information Processing Systems_, 30. [157] Phillip JK Christoffersen, Andreas A Haupt, and Dylan Hadfield-Menell. 2023. Get it in writing: Formal contracts mitigate social dilemmas in multi-agent rl. In _Proceedings of the 2023 International Conference on_ _Autonomous Agents and Multiagent Systems_, pages 448–456. [158] Bilal Chughtai, Lawrence Chan, and Neel Nanda. 2023. A toy model of universality: Reverse engineering how networks learn group operations. _arXiv preprint arXiv:2302.03025_ . [159] Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert’s attention. In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and_ _Interpreting Neural Networks for NLP_, pages 276–286. [[160] Ruby RobertM GPT-4 Claude. 2023. New lw feature debates. https://www.lesswrong.com/post](https://www.lesswrong.com/posts/kXiAGRWFquXFMi68Y/new-lw-feature-debates) [s/kXiAGRWFquXFMi68Y/new-lw-feature-debates.](https://www.lesswrong.com/posts/kXiAGRWFquXFMi68Y/new-lw-feature-debates) [[161] Code Bullet. 2019. Simulator with bugs. https://www.youtube.com/watch?v=K-wIZuAA3EY.](https://www.youtube.com/watch?v=K-wIZuAA3EY) [[162] Collective Intelligence Project. 2023. Introducing the collective intelligence project. https://cip.or](https://cip.org/whitepaper) [g/whitepaper.](https://cip.org/whitepaper) 71 References [163] Vincent Conitzer, Walter Sinnott-Armstrong, Jana Schaich Borg, Yuan Deng, and Max Kramer. 2017. Moral decision making frameworks for artificial intelligence. In _Proceedings of the AAAI Conference on Artificial_ _Intelligence_, volume 31. [164] Arthur Conmy, Augustine N Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià Garriga-Alonso. 2023. Towards automated circuit discovery for mechanistic interpretability. _arXiv preprint arXiv:2304.14997_ . [165] Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. 2009. L2 regularization for learning kernels. In _Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009_, pages 109–116. AUAI Press. [166] Ajeya Cotra. 2018. Iterated distillation and amplification. [[167] Ajeya Cotra. 2021. The case for aligning narrowly superhuman models. https://www.alignmentf](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) [orum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuma](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) [n-models.](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models) [168] Ajeya Cotra. 2022. Without specific countermeasures, the easiest path to transformative AI likely leads to AI [takeover - AI Alignment Forum. https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) [6H/without-specific-countermeasures-the-easiest-path-to.](https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) [169] Andrew Critch and David Krueger. 2020. Ai research considerations for human existential safety (arches). _arXiv preprint arXiv:2006.04948_ . [170] Andrew Critch and Stuart Russell. 2023. Tasra: A taxonomy and analysis of societal-scale risks from ai. _arXiv preprint arXiv:2306.06924_ . [171] Diogo Cruz, José Aleixo Cruz, and Henrique Lopes Cardoso. 2019. Reinforcement learning in multi-agent games: Open ai gym diplomacy environment. In _Progress in Artificial Intelligence: 19th EPIA Conference_ _on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3–6, 2019, Proceedings, Part I 19_, pages 49–60. Springer. [172] Brandon Cui, Hengyuan Hu, Luis Pineda, and Jakob Foerster. 2021. K-level reasoning for zero-shot coordination in hanabi. _Advances in Neural Information Processing Systems_, 34:8215–8228. [173] Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. 2023. Sparse autoencoders find highly interpretable features in language models. _arXiv preprint arXiv:2309.08600_ . [174] Allan Dafoe, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. 2021. Cooperative ai: machines must learn to find common ground. _Nature_, 593(7857):33–36. [175] Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R McKee, Joel Z Leibo, Kate Larson, and Thore Graepel. 2020. Open problems in cooperative ai. _arXiv preprint arXiv:2012.08630_ . [176] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In _Proceedings of the 60th Annual Meeting of the Association for Computational_ _Linguistics (Volume 1: Long Papers)_, pages 8493–8502. [177] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. 2023. Why can gpt learn in-context? language models implicitly perform gradient descent as meta-optimizers. In _ICLR 2023_ _Workshop on Mathematical and Empirical Understanding of Foundation Models_ . [[178] Josef Dai, Tianle Chen, Xuyao Wang, Ziran Yang, Taiye Chen, Jiaming Ji, and Yaodong Yang. 2024a. Safe-](http://arxiv.org/abs/2406.14477) [sora: Towards safety alignment of text2video generation via a human preference dataset.](http://arxiv.org/abs/2406.14477) [179] Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. 2024b. Safe rlhf: Safe reinforcement learning from human feedback. In _The Twelfth International Conference_ _on Learning Representations_ . [180] Brian d’Alessandro, Cathy O’Neil, and Tom LaGatta. 2017. Conscientious classification: A data scientist’s guide to discrimination-aware classification. _Big data_, 5(2):120–134. [[181] David Dalrymple. 2024. Safeguarded ai: Constructing guaranteed safety. Version 1.2.](https://www.aria.org.uk/wp-content/uploads/2024/01/ARIA-Safeguarded-AI-Programme-Thesis-V1.pdf) [182] David Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, et al. 2024. Towards guaranteed safe ai: A framework for ensuring robust and reliable ai systems. _arXiv preprint arXiv:2405.06624_ . 72 References [183] Corentin Dancette, Remi Cadene, Damien Teney, and Matthieu Cord. 2021. Beyond question-based biases: Assessing multimodal shortcut learning in visual question answering. In _Proceedings of the IEEE/CVF Inter-_ _national Conference on Computer Vision_, pages 1574–1583. [184] Richard Danzig. 2012. Aum shinrikyo: insights into how terrorists develop biological and chemical weapons. _Studies in Conflict & Terrorism_ . [185] Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. 2023. Analyzing transformers in embedding space. In _Annual Meeting of the Association for Computational Linguistics_ . [186] Sudeep Dasari, Abhinav Gupta, and Vikash Kumar. 2023. Learning Dexterous Manipulation from Exemplar Object Trajectories and Pre-Grasps. In _2023 IEEE International Conference on Robotics and Automation_ _(ICRA)_ . IEEE. [187] Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: A simple approach to controlled text generation. In _International Conference on Learning Representations_ . [188] Pim De Haan, Dinesh Jayaraman, and Sergey Levine. 2019. Causal confusion in imitation learning. _Ad-_ _vances in Neural Information Processing Systems_, 32. [189] Scott De Marchi and Scott E Page. 2014. Agent-based models. _Annual Review of political science_, 17:1–20. [[190] DeepMind. 2018. Building safe artificial intelligence: specification, robustness, and assurance. https:](https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1) [//deepmindsafetyresearch.medium.com/building-safe-artificial-intelligenc](https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1) [e-52f5f75058f1.](https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1) [[191] DeepMind. 2020. goal misgeneralization. https://docs.google.com/spreadsheets/u/1/d](https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vTo3RkXUAigb25nP7gjpcHriR6XdzA_L5loOcVFj_u7cRAZghWrYKH2L2nU4TA_Vr9KzBX5Bjpz9G_l/pubhtml?pli=1) [/e/2PACX-1vTo3RkXUAigb25nP7gjpcHriR6XdzA_L5loOcVFj_u7cRAZghWrYKH2L2nU4TA_V](https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vTo3RkXUAigb25nP7gjpcHriR6XdzA_L5loOcVFj_u7cRAZghWrYKH2L2nU4TA_Vr9KzBX5Bjpz9G_l/pubhtml?pli=1) [r9KzBX5Bjpz9G_l/pubhtml?pli=1.](https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vTo3RkXUAigb25nP7gjpcHriR6XdzA_L5loOcVFj_u7cRAZghWrYKH2L2nU4TA_Vr9KzBX5Bjpz9G_l/pubhtml?pli=1) [192] Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, et al. 2022. Magnetic control of tokamak plasmas through deep reinforcement learning. _Nature_, 602(7897):414–419. [193] Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, and Yang Liu. 2023a. Jailbreaker: Automated jailbreak across multiple large language model chatbots. _arXiv_ _preprint arXiv:2307.08715_ . [194] Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. 2022. Rlprompt: Optimizing discrete text prompts with reinforcement learning. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 3369–3391. [195] Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, and Lidong Bing. 2023b. Multilingual jailbreak challenges in large language models. _arXiv preprint arXiv:2310.06474_ . [196] Louise Dennis, Michael Fisher, Marija Slavkovik, and Matt Webster. 2016. Formal verification of ethical choices in autonomous systems. _Robotics and Autonomous Systems_, 77:1–14. [197] Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch, and Sergey Levine. 2020. Emergent complexity and zero-shot transfer via unsupervised environment design. _Ad-_ _vances in Neural Information Processing Systems_, 33:13049–13061. [198] Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2020. Eraser: A benchmark to evaluate rationalized nlp models. In _Proceedings of the 58th_ _Annual Meeting of the Association for Computational Linguistics_, pages 4443–4458. [199] Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In _Proceedings of the 57th_ _Annual Meeting of the Association for Computational Linguistics_, pages 4884–4895. [200] Lauro Langosco Di Langosco, Jack Koch, Lee D Sharkey, Jacob Pfau, and David Krueger. 2022. Goal misgeneralization in deep reinforcement learning. In _International Conference on Machine Learning_, pages 12004–12019. PMLR. [201] Thomas G Dietterich. 2017. Steps toward robust artificial intelligence. _AI Magazine_, 38(3):3–24. [202] Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, KaShun SHUM, and Tong Zhang. 2023. RAFT: Reward ranked finetuning for generative foundation model alignment. _Transactions on Machine Learning Research_ . 73 References [203] Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning. _arXiv preprint arXiv:2301.00234_ . [204] Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. _arXiv preprint arXiv:1702.08608_ . [205] Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred Hamprecht. 2018. Essentially no barriers in neural network energy landscape. In _International conference on machine learning_, pages 1309–1318. PMLR. [206] Mengnan Du, Ninghao Liu, and Xia Hu. 2019. Techniques for interpretable machine learning. _Communica-_ _tions of the ACM_, 63(1):68–77. [207] Yali Du. 2023. Cooperative multi-agent learning in a complex world: challenges and solutions. In _Proceed-_ _ings of the AAAI Conference on Artificial Intelligence_, volume 37(13), pages 15436–15436. [208] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. _arXiv preprint arXiv:2305.14325_ . [209] Veljko Dubljevic. 2020. Toward implementing the agent-deed-consequence model of moral judgment in autonomous vehicles. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_, pages 243–243. [210] John C Duchi, Peter W Glynn, and Hongseok Namkoong. 2021. Statistics of robust optimization: A generalized empirical likelihood approach. _Mathematics of Operations Research_, 46(3):946–969. [211] John C Duchi, Lester W Mackey, and Michael I Jordan. 2010. On the consistency of ranking algorithms. In _ICML_, pages 327–334. [212] Nadir Durrani, Hassan Sajjad, Fahim Dalvi, and Yonatan Belinkov. 2020. Analyzing individual neurons in pre-trained language models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural_ _Language Processing (EMNLP)_, pages 4865–4880. [213] Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_ _(Volume 2: Short Papers)_, pages 31–36. [214] Mark Edmonds, Feng Gao, Xu Xie, Hangxin Liu, Siyuan Qi, Yixin Zhu, Brandon Rothrock, and Song-Chun Zhu. 2017. Feeling the force: Integrating force and pose for fluent discovery through imitation learning to open medicine bottles. In _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 3530–3537. IEEE. [215] Ronen Eldan and Mark Russinovich. 2023. Who’s harry potter? approximate unlearning in llms. _arXiv_ _preprint arXiv:2310.02238_ . [216] Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, et al. 2022. Toy models of superposition. _arXiv_ _preprint arXiv:2209.10652_ . [217] Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, et al. 2021. A mathematical framework for transformer circuits. _Trans-_ _former Circuits Thread_, 1. [218] Daniel C Elton. 2020. Self-explaining ai as an alternative to interpretable ai. In _Artificial General Intelli-_ _gence: 13th International Conference, AGI 2020, St. Petersburg, Russia, September 16–19, 2020, Proceedings_ _13_, pages 95–106. Springer. [219] Denis Emelin, Ronan Le Bras, Jena D Hwang, Maxwell Forbes, and Yejin Choi. 2021. Moral stories: Situated reasoning about norms, intents, actions, and their consequences. In _Proceedings of the 2021 Conference on_ _Empirical Methods in Natural Language Processing_, pages 698–718. [220] Logan Engstrom, Andrew Ilyas, Hadi Salman, Shibani Santurkar, and Dimitris Tsipras. 2019a. Robustness [(python library). https://github.com/MadryLab/robustness.](https://github.com/MadryLab/robustness) [221] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, and Aleksander Madry. 2020. Identifying statistical bias in dataset replication. In _International Conference on Machine Learn-_ _ing_, pages 2922–2932. PMLR. [222] Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. 2019b. Exploring the landscape of spatial robustness. In _International conference on machine learning_, pages 1802–1811. PMLR. 74 References [223] Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2009. Visualizing higher-layer features of a deep network. _University of Montreal_, 1341(3):1. [224] Eva Erman and Markus Furendal. 2022. Artificial intelligence and the political legitimacy of global governance. _Political Studies_, page 00323217221126665. [225] Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, et al. 2022. Concept embedding models: Beyond the accuracy-explainability trade-off. _Advances in Neural Information Processing_ _Systems_, 35:21400–21413. [[226] European Parliament. 2023. Eu ai act: first regulation on artificial intelligence. https://www.europa](https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) [rl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-r](https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) [egulation-on-artificial-intelligence.](https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) [[227] Evan Hubinger. 2023. Bing chat is blatantly, aggressively misaligned. https://www.lesswrong.co](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned) [m/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned.](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned) [228] Tom Everitt and Marcus Hutter. 2016. Avoiding wireheading with value reinforcement learning. In _Arti-_ _ficial General Intelligence: 9th International Conference, AGI 2016, New York, NY, USA, July 16-19, 2016,_ _Proceedings 9_, pages 12–22. Springer. [229] Tom Everitt, Marcus Hutter, Ramana Kumar, and Victoria Krakovna. 2021. Reward tampering problems and solutions in reinforcement learning: A causal influence diagram perspective. _Synthese_, 198(Suppl 27):6435– 6467. [230] Tom Everitt, Victoria Krakovna, Laurent Orseau, and Shane Legg. 2017. Reinforcement learning with a corrupted reward channel. In _Proceedings of the 26th International Joint Conference on Artificial Intelligence_, pages 4705–4713. [231] Tom Everitt, Gary Lea, and Marcus Hutter. 2018. Agi safety literature review. In _Proceedings of the Twenty-_ _Seventh International Joint Conference on Artificial Intelligence_, pages 5441–5449. International Joint Conferences on Artificial Intelligence Organization. [232] Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, and Sergey Levine. 2018. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In _International Conference on Learning Representations_ . [[233] Daniel Fabian. 2023. Google’s ai red team: the ethical hackers making ai safer. https://blog.googl](https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer) [e/technology/safety-security/googles-ai-red-team-the-ethical-hackers-mak](https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer) [ing-ai-safer.](https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer) [234] Jerry Alan Fails and Dan R Olsen Jr. 2003. Interactive machine learning. In _Proceedings of the 8th interna-_ _tional conference on Intelligent user interfaces_, pages 39–45. [235] Diplomacy Team FAIR, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. 2022. Human-level play in the game of diplomacy by combining language models with strategic reasoning. _Science_, 378(6624):1067–1074. [236] Tobias Falke, Leonardo FR Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In _Proceedings of the 57th annual meeting of the association for computational linguistics_, pages 2214–2220. [237] Feng-Lei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang. 2021. On interpretability of artificial neural networks: A survey. _IEEE Transactions on Radiation and Plasma Medical Sciences_, 5(6):741–760. [238] Bin Fang, Shidong Jia, Di Guo, Muhua Xu, Shuhuan Wen, and Fuchun Sun. 2019. Survey of imitation learning for robotic manipulation. _International Journal of Intelligent Robotics and Applications_, 3:362–369. [239] Alhussein Fawzi, Matej Balog, Aja Huang, Thomas Hubert, Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Francisco J R Ruiz, Julian Schrittwieser, Grzegorz Swirszcz, et al. 2022. Discovering faster matrix multiplication algorithms with reinforcement learning. _Nature_, 610(7930):47–53. [240] Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. _arXiv preprint arXiv:2305.00955_ . [241] Pedro M Fernandes, Francisco C Santos, and Manuel Lopes. 2020. Adoption dynamics and societal impact of ai systems in complex networks. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_, pages 258–264. 75 References [242] Arnaud Fickinger, Simon Zhuang, Dylan Hadfield-Menell, and Stuart Russell. 2020. Multi-principal assistance games. _arXiv preprint arXiv:2007.09540_ . [243] Tanner Fiez, Benjamin Chasnov, and Lillian Ratliff. 2020. Implicit learning dynamics in stackelberg games: Equilibria characterization, convergence analysis, and empirical study. In _International Conference on Machine_ _Learning_, pages 3133–3144. PMLR. [244] Arduin Findeis, Timo Kaufmann, Eyke Hüllermeier, Samuel Albanie, and Robert Mullins. 2024. Inverse constitutional ai: Compressing preferences into principles. _arXiv preprint arXiv:2406.06560_ . [245] Jaime F Fisac, Monica A Gates, Jessica B Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S Shankar Sastry, Thomas L Griffiths, and Anca D Dragan. 2020. Pragmatic-pedagogic value alignment. In _Robotics Research: The 18th International Symposium ISRR_, pages 49–57. Springer. [246] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, et al. 2021. An ethical framework for a good ai society: Opportunities, risks, principles, and recommendations. _Ethics, governance, and policies in_ _artificial intelligence_, pages 19–39. [247] Jakob Foerster, Ioannis Alexandros Assael, Nando De Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. _Advances in Neural Information Processing Systems_, 29. [248] Jakob N Foerster, Justin Gilmer, Jascha Sohl-Dickstein, Jan Chorowski, and David Sussillo. 2017. Input switched affine networks: An rnn architecture designed for interpretability. In _International conference on_ _machine learning_, pages 1136–1145. PMLR. [249] Ruth Fong and Andrea Vedaldi. 2018. Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks. In _Proceedings of the IEEE conference on computer vision and pattern_ _recognition_, pages 8730–8738. [250] Maxwell Forbes, Jena D Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In _Proceedings of the 2020 Conference on Empirical Methods_ _in Natural Language Processing (EMNLP)_, pages 653–670. [251] Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, and Michael Carbin. 2020. Linear mode connectivity and the lottery ticket hypothesis. In _International Conference on Machine Learning_, pages 3259–3269. PMLR. [252] Daniel Freeman, David Ha, and Luke Metz. 2019. Learning to predict without looking ahead: World models without forward prediction. _Advances in Neural Information Processing Systems_, 32. [253] Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. 2019. From language to goals: Inverse reinforcement learning for vision-based instruction following. In _International Conference on Learning_ _Representations_ . [254] Justin Fu, Katie Luo, and Sergey Levine. 2018a. Learning robust rewards with adverserial inverse reinforcement learning. In _International Conference on Learning Representations_ . [255] Justin Fu, Avi Singh, Dibya Ghosh, Larry Yang, and Sergey Levine. 2018b. Variational inverse control with events: A general framework for data-driven reward definition. _Advances in Neural Information Processing_ _Systems_, 31. [256] Jason Furman and Robert Seamans. 2019. Ai and the economy. _Innovation policy and the economy_, 19(1):161–191. [257] Johannes Fürnkranz and Eyke Hüllermeier. 2003. Pairwise preference learning and ranking. In _European_ _conference on machine learning_, pages 145–156. Springer. [258] Johannes Fürnkranz and Eyke Hüllermeier. 2010. _Preference Learning_ . Springer Science & Business Media. [[259] G20. 2019. G20 ai principles. https://www.mofa.go.jp/policy/economy/g20_summit/o](https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf) [saka19/pdf/documents/en/annex_08.pdf.](https://www.mofa.go.jp/policy/economy/g20_summit/osaka19/pdf/documents/en/annex_08.pdf) [260] Iason Gabriel. 2020. Artificial intelligence, values, and alignment. _Minds and Machines_, 30(3):411–437. [261] Iason Gabriel and Vafa Ghazavi. 2021. The challenge of value alignment: From fairer algorithms to ai safety. _arXiv preprint arXiv:2101.06060_ . 76 References [262] Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. _Advances in Neural Information Processing Systems_, 33:6616–6628. [263] Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. _arXiv preprint arXiv:2209.07858_ . [264] Leo Gao, John Schulman, and Jacob Hilton. 2023. Scaling laws for reward model overoptimization. In _International Conference on Machine Learning_, pages 10835–10866. PMLR. [265] Leo Gao, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. 2024. Scaling and evaluating sparse autoencoders. _arXiv preprint arXiv:2406.04093_ . [266] Javier Garcıa and Fernando Fernández. 2015. A comprehensive survey on safe reinforcement learning. _Jour-_ _nal of Machine Learning Research_, 16(1):1437–1480. [267] Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. 2018. Loss surfaces, mode connectivity, and fast ensembling of dnns. _Advances in Neural Information Processing Systems_, 31. [268] Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor. 2016. Logical induction. _arXiv preprint arXiv:1609.03543_ . [269] Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In _Findings of the Association for Com-_ _putational Linguistics: EMNLP 2020_, pages 3356–3369. [270] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. 2019. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In _International Conference on Learning Representations_ . [271] Mor Geva, Avi Caciularu, Guy Dar, Paul Roit, Shoval Sadde, Micah Shlain, Bar Tamir, and Yoav Goldberg. 2022. Lm-debugger: An interactive tool for inspection and intervention in transformer-based language models. In _Proceedings of the The 2022 Conference on Empirical Methods in Natural Language Processing: System_ _Demonstrations_, pages 12–21. [272] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are keyvalue memories. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 5484–5495. [273] Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. 2020. A divergence minimization perspective on imitation learning methods. In _Conference on Robot Learning_, pages 1259–1277. PMLR. [274] Justin Gilmer, Nicolas Ford, Nicholas Carlini, and Ekin Cubuk. 2019. Adversarial examples are a natural consequence of test error in noise. In _International Conference on Machine Learning_, pages 2280–2289. PMLR. [275] Amelia Glaese, Nat McAleese, Maja Tr˛ebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents via targeted human judgements. _arXiv preprint arXiv:2209.14375_ . [276] Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. 2021. Multimodal neurons in artificial neural networks. _Distill_, 6(3):e30. [277] Josh A Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova. 2023. Generative language models and automated influence operations: Emerging threats and potential mitigations. _arXiv preprint arXiv:2301.04246_ . [278] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_ . [279] Charles AE Goodhart and CAE Goodhart. 1984. _Problems of monetary management: the UK experience_ . Springer. [[280] Google DeepMind. 2024. Gemma scope: Helping the safety community shed light on the inner workings of](https://deepmind.google/discover/blog/gemma-scope-helping-the-safety-community-shed-light-on-the-inner-workings-of-language-models/) [language models. Accessed: 2024-09-13.](https://deepmind.google/discover/blog/gemma-scope-helping-the-safety-community-shed-light-on-the-inner-workings-of-language-models/) 77 References [281] Government of the United Kingdom. 2021. The roadmap to an effective ai assurance ecosystem - extended [version. https://www.gov.uk/government/publications/the-roadmap-to-an-effec](https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem/the-roadmap-to-an-effective-ai-assurance-ecosystem-extended-version) [tive-ai-assurance-ecosystem/the-roadmap-to-an-effective-ai-assurance-eco](https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem/the-roadmap-to-an-effective-ai-assurance-ecosystem-extended-version) [system-extended-version.](https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem/the-roadmap-to-an-effective-ai-assurance-ecosystem-extended-version) [[282] Government of the United Kingdom. 2023. Frontier ai: capabilities and risks – discussion paper. https:](https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper) [//www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-d](https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper) [iscussion-paper.](https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper) [283] Prasoon Goyal, Scott Niekum, and Raymond J Mooney. 2019. Using natural language for reward shaping in reinforcement learning. In _Proceedings of the 28th International Joint Conference on Artificial Intelligence_, pages 2385–2391. [284] Nico Grant and Karen Weise. 2023. In ai race, microsoft and google choose speed over caution. _The New_ _York Times_ . [285] Ryan Greenblatt, Buck Shlegeris, Kshitij Sachan, and Fabien Roger. 2023. Ai control: Improving safety despite intentional subversion. _arXiv preprint arXiv:2312.06942_ . [286] Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L Isbell, and Andrea L Thomaz. 2013. Policy shaping: Integrating human feedback with reinforcement learning. _Advances in Neural Information Pro-_ _cessing Systems_, 26. [287] Sven Gronauer and Klaus Diepold. 2022. Multi-agent deep reinforcement learning: a survey. _Artificial_ _Intelligence Review_, pages 1–49. [288] Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini, Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, et al. 2023. Studying large language model generalization with influence functions. _arXiv preprint arXiv:2308.03296_ . [289] Luke Guerdan, Amanda Coston, Zhiwei Steven Wu, and Kenneth Holstein. 2023. Ground (less) truth: A causal framework for proxy labels in human-algorithm decision-making. In _Proceedings of the 2023 ACM_ _Conference on Fairness, Accountability, and Transparency_, pages 688–704. [290] Carlos Guestrin, Daphne Koller, and Ronald Parr. 2001. Multiagent planning with factored mdps. _Advances_ _in Neural Information Processing Systems_, 14. [291] Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al. 2023. Reinforced self-training (rest) for language modeling. _arXiv preprint arXiv:2308.08998_ . [[292] Wes Gurnee and Max Tegmark. 2023. Language models represent space and time.](http://arxiv.org/abs/2310.02207) [[293] António Guterres. 2023. Secretary-general’s remarks to the security council on artificial intelligence. http](https://www.un.org/sg/en/content/sg/speeches/2023-07-18/secretary-generals-remarks-the-security-council-artificial-intelligence) [s://www.un.org/sg/en/content/sg/speeches/2023-07-18/secretary-generals-r](https://www.un.org/sg/en/content/sg/speeches/2023-07-18/secretary-generals-remarks-the-security-council-artificial-intelligence) [emarks-the-security-council-artificial-intelligence.](https://www.un.org/sg/en/content/sg/speeches/2023-07-18/secretary-generals-remarks-the-security-council-artificial-intelligence) [294] Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. 2017a. The off-switch game. In _Workshops at the Thirty-First AAAI Conference on Artificial Intelligence_ . [295] Dylan Hadfield-Menell and Gillian K Hadfield. 2019. Incomplete contracting and ai alignment. In _Proceed-_ _ings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 417–422. [296] Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart J Russell, and Anca Dragan. 2017b. Inverse reward design. _Advances in Neural Information Processing Systems_, 30. [297] Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. 2016. Cooperative inverse reinforcement learning. _Advances in Neural Information Processing Systems_, 29. [298] Thilo Hagendorff. 2020. The ethics of ai ethics: An evaluation of guidelines. _Minds and Machines_, 30(1):99– 120. [299] Thilo Hagendorff. 2022. A virtue-based framework to support putting ai ethics into practice. _Philosophy &_ _Technology_, 35(3):55. [300] Michael Hanna, Ollie Liu, and Alexandre Variengien. 2024. How does gpt-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. _Advances in Neural Information Processing_ _Systems_, 36. 78 References [301] Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2021. Self-attention attribution: Interpreting information interactions inside transformer. _Proceedings of the AAAI Conference on Artificial Intelligence_, 35(14):12963–12971. [302] Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In _Pro-_ _ceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 3309–3326. [303] Trevor Hastie, Robert Tibshirani, Jerome Friedman, Trevor Hastie, Robert Tibshirani, and Jerome Friedman. 2009. Overview of supervised learning. _The elements of statistical learning: Data mining, inference, and_ _prediction_, pages 9–41. [[304] Jerry Zhi-Yang He and Anca D. Dragan. 2021. Assisted robust reward design. In](https://proceedings.mlr.press/v164/he22a.html) _Conference on Robot_ _Learning, 8-11 November 2021, London, UK_, volume 164 of _Proceedings of Machine Learning Research_, pages 1234–1246. PMLR. [305] Luxi He, Mengzhou Xia, and Peter Henderson. 2024. What’s in your" safe" data?: Identifying benign data that breaks safety. _arXiv preprint arXiv:2404.01099_ . [306] Stefan Heimersheim and Janiak Jett. 2023. A circuit for Python docstrings in a 4-layer attention-only trans[former. https://www.alignmentforum.org/posts/u6KXXmKFbXfWzoAXn/a-circuit-for](https://www.alignmentforum.org/posts/u6KXXmKFbXfWzoAXn/a-circuit-for-python-docstrings-in-a-4-layer-attention-only) [-python-docstrings-in-a-4-layer-attention-only.](https://www.alignmentforum.org/posts/u6KXXmKFbXfWzoAXn/a-circuit-for-python-docstrings-in-a-4-layer-attention-only) [307] Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W. Bradley Knox, and Dorsa Sadigh. 2024. Contrastive preference learning: Learning from human feedback without reinforcement learning. In _The Twelfth International Conference on Learning Representations_ . [308] Donald Joseph Hejna III and Dorsa Sadigh. 2022. Few-Shot Preference Learning for Human-in-the-Loop RL. In _Conference on Robot Learning (CoRL)_, pages 2014–2025. [[309] Dan Hendrycks. 2022. Pragmatic ai safety. https://www.alignmentforum.org/s/FaEBwhhe3](https://www.alignmentforum.org/s/FaEBwhhe3otzYKGQt) [otzYKGQt.](https://www.alignmentforum.org/s/FaEBwhhe3otzYKGQt) [[310] Dan Hendrycks. 2023. Natural selection favors ais over humans.](http://arxiv.org/abs/2303.16200) [311] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. 2021a. The many faces of robustness: A critical analysis of outof-distribution generalization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 8340–8349. [312] Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2020. Aligning ai with shared human values. In _International Conference on Learning Representations_ . [313] Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. 2021b. Unsolved problems in ml safety. _arXiv preprint arXiv:2109.13916_ . [314] Dan Hendrycks and Thomas Dietterich. 2018. Benchmarking neural network robustness to common corruptions and perturbations. In _International Conference on Learning Representations_ . [315] Dan Hendrycks and Kevin Gimpel. 2016. [Gaussian error linear units (gelus).](https://arxiv.org/abs/1606.08415) _arXiv preprint_ _arXiv:1606.08415_ . [316] Dan Hendrycks and Mantas Mazeika. 2022. X-risk analysis for ai research. _arXiv preprint_ _arXiv:2206.05862_ . [317] Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. 2019. Using self-supervised learning can improve model robustness and uncertainty. _Advances in Neural Information Processing Systems_, 32. [318] Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. 2023. An overview of catastrophic ai risks. _arXiv_ _preprint arXiv:2306.12001_ . [319] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. 2021c. Natural adversarial examples. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15262–15271. [320] Jonathan Ho and Stefano Ermon. 2016. Generative adversarial imitation learning. _Advances in Neural_ _Information Processing Systems_, 29. 79 References [321] Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, et al. 2023. International institutions for advanced ai. _arXiv preprint arXiv:2307.04699_ . [[322] Marius Hobbhahn. 2022. Eliciting latent knowledge (elk) - distillation/summary. https://www.alignm](https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary) [entforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-disti](https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary) [llation-summary.](https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary) [323] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. An empirical analysis of compute-optimal large language model training. _Advances in Neural Information Processing Systems_, 35:30016–30030. [324] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In _International Conference on Learning Representations_ . [325] Andreas Holzinger. 2016. Interactive machine learning for health informatics: when do we need the humanin-the-loop? _Brain Informatics_, 3(2):119–131. [326] Andreas Holzinger, Chris Biemann, Constantinos S Pattichis, and Douglas B Kell. 2017. What do we need to build explainable ai systems for the medical domain? _arXiv preprint arXiv:1712.09923_ . [327] Joey Hong, Kush Bhatia, and Anca Dragan. 2022. On the sensitivity of reward inference to misspecified human models. In _The Eleventh International Conference on Learning Representations_ . [328] Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. Q2:: Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 7856–7870. [[329] Jeremy Howard. 2023. Ai safety and the age of dislightenment. https://www.fast.ai/posts/20](https://www.fast.ai/posts/2023-11-07-dislightenment.html) [23-11-07-dislightenment.html.](https://www.fast.ai/posts/2023-11-07-dislightenment.html) [330] Hengyuan Hu, Adam Lerer, Brandon Cui, Luis Pineda, Noam Brown, and Jakob Foerster. 2021. Off-belief learning. In _International Conference on Machine Learning_, pages 4369–4379. PMLR. [331] Hengyuan Hu, Adam Lerer, Alex Peysakhovich, and Jakob Foerster. 2020. “other-play” for zero-shot coordination. In _International Conference on Machine Learning_, pages 4399–4410. PMLR. [[332] Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, and Zhijiang Guo. 2024. Do](https://openreview.net/forum?id=9OevMUdods) [large language models know about facts? In](https://openreview.net/forum?id=9OevMUdods) _The Twelfth International Conference on Learning Representations_ . [[333] Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, and Pieter Abbeel. 2017. Adversarial](https://openreview.net/forum?id=ryvlRyBKl) [attacks on neural network policies. In](https://openreview.net/forum?id=ryvlRyBKl) _5th International Conference on Learning Representations, ICLR 2017,_ _Toulon, France, April 24-26, 2017, Workshop Track Proceedings_ . OpenReview.net. [334] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. 2023. Inner monologue: Embodied reasoning through planning with language models. In _Conference on Robot Learning_, pages 1769–1782. PMLR. [335] Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, and Danqi Chen. 2024. Catastrophic jailbreak of open-source LLMs via exploiting generation. In _The Twelfth International Conference on Learning Represen-_ _tations_ . [336] Evan Hubinger. 2020. An overview of 11 proposals for building safe advanced ai. _arXiv preprint_ _arXiv:2012.07532_ . [337] Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M Ziegler, Tim Maxwell, Newton Cheng, et al. 2024. Sleeper agents: Training deceptive llms that persist through safety training. _arXiv preprint arXiv:2401.05566_ . [338] Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 2019a. Deceptive [alignment. https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-a](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) [lignment.](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) [339] Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 2019b. The inner [alignment problem. https://www.alignmentforum.org/posts/pL56xPoniLvtMDQ4J/the-i](https://www.alignmentforum.org/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem) [nner-alignment-problem.](https://www.alignmentforum.org/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem) 80 References [340] Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 2019c. Risks from learned optimization in advanced machine learning systems. _arXiv preprint arXiv:1906.01820_ . [341] Drew Arad Hudson and Christopher D. Manning. 2018. Compositional attention networks for machine reasoning. In _International Conference on Learning Representations_ . [342] Eyke Hüllermeier, Johannes Fürnkranz, Weiwei Cheng, and Klaus Brinker. 2008. Label ranking by learning pairwise preferences. _Artificial Intelligence_, 172(16-17):1897–1916. [343] Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. 2017. Imitation learning: A survey of learning methods. _ACM Computing Surveys (CSUR)_, 50(2):1–35. [344] Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. 2018. Reward learning from human preferences and demonstrations in atari. _Advances in Neural Information Processing_ _Systems_, 31. [345] Geoffrey Irving, Paul Christiano, and Dario Amodei. 2018. Ai safety via debate. _arXiv preprint_ _arXiv:1805.00899_ . [346] Charles Isbell, Christian R Shelton, Michael Kearns, Satinder Singh, and Peter Stone. 2001. A social reinforcement learning agent. In _Proceedings of the fifth international conference on Autonomous agents_, pages 377–384. [[347] Jacob Steinhardt. 2023. Emergent deception and emergent optimization. https://bounded-regret.](https://bounded-regret.ghost.io/emergent-deception-optimization) [ghost.io/emergent-deception-optimization.](https://bounded-regret.ghost.io/emergent-deception-optimization) [348] Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, et al. 2019. Human-level performance in 3d multiplayer games with population-based reinforcement learning. _Science_, 364(6443):859–865. [349] Ashesh Jain, Brian Wojcik, Thorsten Joachims, and Ashutosh Saxena. 2013. Learning trajectory preferences for manipulators via iterative improvement. _Advances in Neural Information Processing Systems_, 26. [350] Hong Jun Jeon, Smitha Milli, and Anca Dragan. 2020. Reward-rational (implicit) choice: A unifying formalism for reward learning. _Advances in Neural Information Processing Systems_, 33:4415–4426. [351] Jiaming Ji, Boyuan Chen, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai, and Yaodong Yang. 2024a. Aligner: Achieving efficient alignment through weak-to-strong correction. _arXiv preprint_ _arXiv:2402.02416_ . [352] Jiaming Ji, Mickel Liu, Josef Dai, Xuehai Pan, Chi Zhang, Ce Bian, Boyuan Chen, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2024b. Beavertails: Towards improved safety alignment of llm via a humanpreference dataset. _Advances in Neural Information Processing Systems_, 36. [353] Jiaming Ji, Kaile Wang, Tianyi Qiu, Boyuan Chen, JiayiZhou ChangyeLi HantaoLou YaodongYang, and PKU-Alignment Team. 2024c. Language models resist alignment. _arXiv preprint arXiv:2406.06144_ . [354] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. _ACM Computing_ _Surveys (CSUR)_, 55(12):1–38. [355] Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 2021–2031. [356] Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob Foerster, Edward Grefenstette, and Tim Rocktäschel. 2021. Replay-guided adversarial environment design. _Advances in Neural Information Processing_ _Systems_, 34:1884–1897. [357] Zhijing Jin, Sydney Levine, Fernando Gonzalez Adauto, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, and Bernhard Schölkopf. 2022a. When to make exceptions: Exploring language models as accounts of human moral judgment. _Advances in Neural Information Processing Systems_, 35:28458– 28473. [358] Zhijing Jin, Sydney Levine, Fernando Gonzalez Adauto, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, [Rada Mihalcea, Josh Tenenbaum, and Bernhard Schölkopf. 2022b. When to make exceptions: Exploring lan-](https://proceedings.neurips.cc/paper_files/paper/2022/file/b654d6150630a5ba5df7a55621390daf-Paper-Conference.pdf) [guage models as accounts of human moral judgment. In](https://proceedings.neurips.cc/paper_files/paper/2022/file/b654d6150630a5ba5df7a55621390daf-Paper-Conference.pdf) _Advances in Neural Information Processing Systems_, volume 35, pages 28458–28473. [[359] Jonas DeGrave. 2022. Building a virtual machine inside chatgpt. https://www.engraved.blog/bu](https://www.engraved.blog/building-a-virtual-machine-inside) [ilding-a-virtual-machine-inside.](https://www.engraved.blog/building-a-virtual-machine-inside) 81 References [360] Erik Jones, Anca D. Dragan, Aditi Raghunathan, and Jacob Steinhardt. 2023. Automatically auditing large language models via discrete optimization. In _International Conference on Machine Learning, ICML 2023,_ _23-29 July 2023, Honolulu, Hawaii, USA_, volume 202 of _Proceedings of Machine Learning Research_, pages 15307–15329. PMLR. [361] Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, and Naomi Saphra. 2022. Linear connectivity reveals generalization strategies. In _The Eleventh International Conference on Learning Representations_ . [362] Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. 1996. Reinforcement learning: A survey. _Journal of artificial intelligence research_, 4:237–285. [363] Dimitris Kalimeris, Smriti Bhagat, Shankar Kalyanaraman, and Udi Weinsberg. 2021. Preference amplification in recommender systems. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery_ _& Data Mining_, pages 805–815. [364] Josh Kalin, Matthew Ciolino, David Noever, and Gerry Dozier. 2020. Black box to white box: Discover model characteristics based on strategic probing. In _2020 Third International Conference on Artificial Intelli-_ _gence for Industries (AI4I)_, pages 60–63. IEEE. [365] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. _arXiv preprint_ _arXiv:2001.08361_ . [366] Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. _arXiv preprint arXiv:1506.02078_ . [367] Michael Kearns and Aaron Roth. 2019. _The ethical algorithm: The science of socially aware algorithm_ _design_ . Oxford University Press. [368] Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_, pages 4171–4186. [369] Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. 2021. Alignment of language agents. _arXiv preprint arXiv:2103.14659_ . [370] Zachary Kenton, Rohin Shah, David Lindner, Vikrant Varma, Victoria Krakovna, Mary Phuong, Ramana [Kumar, and Elliot Catt. 2022. Threat model literature review. https://www.alignmentforum.org/p](https://www.alignmentforum.org/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review) [osts/wnnkD6P2k2TfHnNmt/threat-model-literature-review.](https://www.alignmentforum.org/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review) [371] Zachary Kenton, Noah Y Siegel, János Kramár, Jonah Brown-Cohen, Samuel Albanie, Jannis Bulian, Rishabh Agarwal, David Lindner, Yunhao Tang, Noah D Goodman, et al. 2024. On scalable oversight with weak llms judging strong llms. _arXiv preprint arXiv:2407.04622_ . [372] Ben Kenward and Thomas Sinclair. 2021. Machine morality, moral progress, and the looming environmental disaster. _Cognitive Computation and Systems_, 3(2):83–90. [373] Cameron F Kerry, Joshua P Meltzer, Andrea Renda, Alex Engler, and Rosanna Fanni. 2021. Strengthening [international cooperation on ai, progress report. https://www.brookings.edu/articles/strengt](https://www.brookings.edu/articles/strengthening-international-cooperation-on-ai) [hening-international-cooperation-on-ai.](https://www.brookings.edu/articles/strengthening-international-cooperation-on-ai) [374] Akbir Khan, John Hughes, Dan Valentine, Laura Ruis, Kshitij Sachan, Ansh Radhakrishnan, Edward Grefenstette, Samuel R Bowman, Tim Rocktäschel, and Ethan Perez. 2024. Debating with more persuasive llms leads to more truthful answers. _arXiv preprint arXiv:2402.06782_ . [375] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In _Interna-_ _tional conference on machine learning_, pages 2668–2677. PMLR. [376] Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. 2023. Preference Transformer: Modeling Human Preferences using Transformers for RL. In _International Conference on_ _Learning Representations (ICLR)_ . [377] Geunwoo Kim, Pierre Baldi, and Stephen McAleer. 2024. Language models can solve computer tasks. _Advances in Neural Information Processing Systems_, 36. [378] Kuno Kim, Shivam Garg, Kirankumar Shiragur, and Stefano Ermon. 2021. Reward identification in inverse reinforcement learning. In _International Conference on Machine Learning_, pages 5496–5505. PMLR. 82 References [379] Pieter-Jan Kindermans, Kristof T Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. 2018. Learning how to explain neural networks: Patternnet and patternattribution. In _International Conference on Learning Representations_ . [380] Megan Kinniment, Lucas Jun Koba Sato, Haoxing Du, Brian Goodrich, Max Hasin, Lawrence Chan, Luke Harold Miles, Tao R Lin, Hjalmar Wijk, Joel Burget, Aaron Ho, Elizabeth Barnes, and Paul Christiano. [2023. Evaluating language-model agents on realistic autonomous tasks. https://evals.alignment.](https://evals.alignment.org/language-model-pilot-report) [org/language-model-pilot-report.](https://evals.alignment.org/language-model-pilot-report) [381] Andrei Kirilenko, Albert S Kyle, Mehrdad Samadi, and Tugkan Tuzun. 2017. The flash crash: Highfrequency trading in an electronic market. _The Journal of Finance_, 72(3):967–998. [382] Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. _arXiv preprint arXiv:1805.04508_ . [383] Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, He He, et al. 2024. The prism alignment project: What participatory, representative and individualised human feedback reveals about the subjective and multicultural alignment of large language models. _arXiv preprint arXiv:2404.16019_ . [384] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. _Advances in Neural Information Processing Systems_, 30. [385] Toryn Q Klassen, Sheila A McIlraith, Christian Muise, and Jarvis Xu. 2022. Planning to avoid side effects. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36(9), pages 9830–9839. [386] Franziska Klügl, Manuel Fehler, and Rainer Herrler. 2005. About the role of the environment in multi-agent simulations. In _Environments for Multi-Agent Systems: First International Workshop, E4MAS 2004, New York,_ _NY, July 19, 2004, Revised Selected Papers 1_, pages 127–149. Springer. [387] W Bradley Knox, Alessandro Allievi, Holger Banzhaf, Felix Schmitt, and Peter Stone. 2023. Reward (mis) design for autonomous driving. _Artificial Intelligence_, 316:103829. [388] W Bradley Knox and Peter Stone. 2008. Tamer: Training an agent manually via evaluative reinforcement. In _2008 7th IEEE international conference on development and learning_, pages 292–297. IEEE. [389] W Bradley Knox and Peter Stone. 2012. Reinforcement learning from simultaneous human and mdp reward. In _AAMAS_, volume 1004, pages 475–482. Valencia. [390] W Bradley Knox and Peter Stone. 2013. Learning non-myopically from human-generated reward. In _Pro-_ _ceedings of the 2013 international conference on Intelligent user interfaces_, pages 191–202. [391] W Bradley Knox, Peter Stone, and Cynthia Breazeal. 2013. Training a robot via human feedback: A case study. In _Social Robotics: 5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Pro-_ _ceedings 5_, pages 460–470. Springer. [392] William Bradley Knox. 2012. Learning from human-generated reward. [393] Leonie Koessler and Jonas Schuett. 2023. Risk assessment at agi companies: A review of popular risk assessment techniques from other safety-critical industries. _arXiv preprint arXiv:2307.08823_ . [394] Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In _International conference on machine learning_, pages 1885–1894. PMLR. [395] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richárd Nagyfi, et al. 2024. Openassistant conversationsdemocratizing large language model alignment. _Advances in Neural Information Processing Systems_, 36. [396] Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Vinayak Bhalerao, Christopher Buckley, Jason Phang, Samuel R Bowman, and Ethan Perez. 2023. Pretraining language models with human preferences. In _Interna-_ _tional Conference on Machine Learning_, pages 17506–17533. PMLR. [397] Peter Krafft, Chris Baker, Alex Pentland, and Joshua Tenenbaum. 2016. Modeling human ad hoc coordination. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 30–(1). [[398] Victoria Krakovna. 2020. More instances about specification gaming. https://docs.google.com/](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) [spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTi](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) [RRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml.](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) 83 References [[399] Victoria Krakovna. 2022. Paradigms of ai alignment: components and enablers. https://vkrakovna.](https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers) [wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enabler](https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers) [s.](https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers) [400] Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. Gedi: Generative discriminator guided sequence generation. In _Findings of_ _the Association for Computational Linguistics: EMNLP 2021_, pages 4929–4952. [401] Satyapriya Krishna, Tessa Han, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, and Himabindu Lakkaraju. 2022. The disagreement problem in explainable machine learning: A practitioner’s perspective. _arXiv preprint arXiv:2202.01602_ . [402] Maya Krishnan. 2020. Against interpretability: a critical examination of the interpretability problem in machine learning. _Philosophy & Technology_, 33(3):487–502. [403] David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. 2021. Out-of-distribution generalization via risk extrapolation (rex). In _International Conference on Machine Learning_, pages 5815–5826. PMLR. [404] David Krueger, Tegan Maharaj, and Jan Leike. 2020. Hidden incentives for auto-induced distributional shift. _arXiv preprint arXiv:2009.09153_ . [405] Andras Kupcsik, Marc Deisenroth, Jan Peters, and Gerhard Neumann. 2013. Data-efficient generalization of robot skills with contextual policy search. In _Proceedings of the AAAI conference on artificial intelligence_, volume 27(1), pages 1401–1407. [406] Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, and Finale Doshi-Velez. 2019. An evaluation of the human-interpretability of explanation. _arXiv preprint arXiv:1902.00006_ . [407] Isaac Lage, Andrew Ross, Samuel J Gershman, Been Kim, and Finale Doshi-Velez. 2018. Human-in-theloop interpretability prior. _Advances in Neural Information Processing Systems_, 31. [408] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. 2017. Building machines that learn and think like people. _Behavioral and brain sciences_, 40:e253. [409] Richard N Landers and Tara S Behrend. 2023. Auditing the ai auditors: A framework for evaluating fairness and bias in high stakes ai predictive models. _American Psychologist_, 78(1):36. [410] Leon Lang, Davis Foote, Stuart Russell, Anca Dragan, Erik Jenner, and Scott Emmons. 2024. When your ai deceives you: Challenges with partial observability of human evaluators in reward learning. _arXiv preprint_ _arXiv:2402.17747_ . [411] Chan Lawrence, Adria Garriga-alonso, Nicholas Goldowsky-Dill, Ryan Greenblatt, Jenny Nitishinskaya, Ansh Radhakrishnan, Buck Shlegeris, and Thomas Nate. 2023. Causal Scrubbing: a method for rigorously [testing interpretability hypotheses [Redwood Research]. https://www.alignmentforum.org/posts](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing) [/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing.](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing) [412] Yann LeCun. 2022. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. _Open_ _Review_, 62. [413] Deokjae Lee, Seungyong Moon, Junhyeok Lee, and Hyun Oh Song. 2022. Query-efficient and scalable black-box adversarial attacks on discrete sequential data via bayesian optimization. In _International Conference_ _on Machine Learning_, pages 12478–12497. PMLR. [414] Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. 2023a. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. _arXiv preprint arXiv:2309.00267_ . [415] Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. 2017. Interactive visualization and manipulation of attention-based neural machine translation. In _Proceedings of the 2017 Conference on Empirical Methods in_ _Natural Language Processing: System Demonstrations_, pages 121–126. [416] Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, and Ruslan Salakhutdinov. 2019. Efficient exploration via state marginal matching. _arXiv preprint arXiv:1906.05274_ . [417] Sunok Lee, Minha Lee, and Sangsu Lee. 2023b. What if artificial intelligence become completely ambient in our daily lives? exploring future human-ai interaction through high fidelity illustrations. _International Journal_ _of Human–Computer Interaction_, 39(7):1371–1389. 84 References [418] Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J Bentley, Samuel Bernard, Guillaume Beslon, David M Bryson, et al. 2020. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. _Artificial_ _life_, 26(2):274–306. [419] Joel Lehman, Kenneth O Stanley, et al. 2008. Exploiting open-endedness to solve problems through the search for novelty. In _ALIFE_, pages 329–336. [420] Joel Z Leibo, Edgar A Dueñez-Guzman, Alexander Vezhnevets, John P Agapiou, Peter Sunehag, Raphael Koster, Jayd Matyas, Charlie Beattie, Igor Mordatch, and Thore Graepel. 2021. Scalable evaluation of multiagent reinforcement learning with melting pot. In _International conference on machine learning_, pages 6187– 6199. PMLR. [[421] Jan Leike. 2022. A proposal for improving societys values.](https://aligned.substack.com/p/a-proposal-for-importing-societys-values) [[422] Jan Leike. 2023a. Combining weak-to-strong generalization with scalable oversight. https://aligne](https://aligned.substack.com/p/combining-w2sg-with-scalable-oversight?utm_source=post-email-title&publication_id=328633&post_id=139945470&utm_campaign=email-post-title&isFreemail=true&r=2xbqf0) [d.substack.com/p/combining-w2sg-with-scalable-oversight?utm_source=post-e](https://aligned.substack.com/p/combining-w2sg-with-scalable-oversight?utm_source=post-email-title&publication_id=328633&post_id=139945470&utm_campaign=email-post-title&isFreemail=true&r=2xbqf0) [mail-title&publication_id=328633&post_id=139945470&utm_campaign=email-pos](https://aligned.substack.com/p/combining-w2sg-with-scalable-oversight?utm_source=post-email-title&publication_id=328633&post_id=139945470&utm_campaign=email-post-title&isFreemail=true&r=2xbqf0) [t-title&isFreemail=true&r=2xbqf0.](https://aligned.substack.com/p/combining-w2sg-with-scalable-oversight?utm_source=post-email-title&publication_id=328633&post_id=139945470&utm_campaign=email-post-title&isFreemail=true&r=2xbqf0) [[423] Jan Leike. 2023b. A proposal for importing society’s values. https://aligned.substack.com/p](https://aligned.substack.com/p/a-proposal-for-importing-societys-values) [/a-proposal-for-importing-societys-values.](https://aligned.substack.com/p/a-proposal-for-importing-societys-values) [424] Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. Scalable agent alignment via reward modeling: a research direction. _arXiv preprint arXiv:1811.07871_ . [425] Filippa Lentzos. 2022. Ai and biological weapons. In _Armament, Arms Control and Artificial Intelligence:_ _The Janus-faced Nature of Machine Learning in the Military Realm_, pages 91–100. Springer. [426] Belinda Z Li, Maxwell Nye, and Jacob Andreas. 2021a. Implicit representations of meaning in neural language models. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics_ _and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 1813–1827. [427] Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, and Bowen Zhou. 2023a. Trustworthy ai: From principles to practices. _ACM Computing Surveys (CSUR)_, 55(9):1–46. [428] Chao Li, Kelu Yao, Jin Wang, Boyu Diao, Yongjun Xu, and Quanshi Zhang. 2022a. Interpretable generative adversarial networks. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36(2), pages 1280–1288. [429] Jiawei Li, Yiming Li, Xingchun Xiang, Shu-Tao Xia, Siyi Dong, and Yun Cai. 2020. Tnt: An interpretable tree-network-tree learning framework using knowledge distillation. _Entropy_, 22(11):1203. [430] Kenneth Li, Aspen K Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2022b. Emergent world representations: Exploring a sequence model trained on a synthetic task. In _The_ _Eleventh International Conference on Learning Representations_ . [431] Mengxi Li, Alper Canberk, Dylan P Losey, and Dorsa Sadigh. 2021b. Learning human objectives from sequences of physical corrections. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 2877–2883. IEEE. [432] Tao Li and Suresh P Sethi. 2017. A review of dynamic stackelberg game models. _Discrete & Continuous_ _Dynamical Systems-B_, 22(1):125. [433] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Xin Zhao, and Ji-Rong Wen. 2023b. Evaluating object hallucination in large vision-language models. In _Proceedings of the 2023 Conference on Empirical Methods in_ _Natural Language Processing_, pages 292–305. [434] Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In _International Conference on Machine Learning_, pages 6565–6576. PMLR. [435] Tom Lieberum, Matthew Rahtz, János Kramár, Neel Nanda, Geoffrey Irving, Rohin Shah, and Vladimir [Mikulik. 2023. Does circuit analysis interpretability scale? evidence from multiple choice capabilities in chin-](http://arxiv.org/abs/2307.09458) [chilla.](http://arxiv.org/abs/2307.09458) [436] Wim BG Liebrand. 1984. The effect of social motives, communication and group size on behaviour in an n-person multi-stage mixed-motive game. _European journal of social psychology_, 14(3):239–264. 85 References [437] Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural_ _Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-_ _IJCNLP)_, pages 2829–2839. [438] Fengming Lin, Xiaolei Fang, and Zheming Gao. 2022a. Distributionally robust optimization: A review on theory and applications. _Numerical Algebra, Control and Optimization_, 12(1):159–212. [439] Jessy Lin, Daniel Fried, Dan Klein, and Anca Dragan. 2022b. Inferring rewards from language in context. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long_ _Papers)_, pages 8546–8560. [440] Stephanie Lin, Jacob Hilton, and Owain Evans. 2022c. Truthfulqa: Measuring how models mimic human falsehoods. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Vol-_ _ume 1: Long Papers)_, pages 3214–3252. [441] Tsung-Han Lin and Ping Tak Peter Tang. 2019. Sparse dictionary learning by dynamical neural networks. In _International Conference on Learning Representations_ . [442] Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. In _Proceedings of the 5th Workshop on NLP for Con-_ _versational AI (NLP4ConvAI 2023)_, pages 47–58. [443] Zachary C Lipton. 2018. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. _Queue_, 16(3):31–57. [444] Qin Liu, Meng Zheng, Benjamin Planche, Srikrishna Karanam, Terrence Chen, Marc Niethammer, and Ziyan Wu. 2022. Pseudoclick: Interactive Image Segmentation with Click Imitation. In _European Conference on_ _Computer Vision (ECCV)_ . [445] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Diyi Yang, and Soroush Vosoughi. 2024a. Training socially aligned language models on simulated social interactions. In _The Twelfth International Conference on Learning_ _Representations_ . [446] Shusen Liu, Tao Li, Zhimin Li, Vivek Srikumar, Valerio Pascucci, and Peer-Timo Bremer. 2018. Visual interrogation of attention-based models for natural language inference and machine comprehension. Technical report, Lawrence Livermore National Lab.(LLNL), Livermore, CA (United States). [447] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2024b. Agentbench: Evaluating LLMs as agents. In _The Twelfth International Conference on Learning Representations_ . [448] Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020. Adversarial training for large neural language models. _arXiv preprint arXiv:2004.08994_ . [449] Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, and Yang Liu. 2023. Jailbreaking chatgpt via prompt engineering: An empirical study. _arXiv preprint_ _arXiv:2305.13860_ . [450] Robert Loftin, Bei Peng, James MacGlashan, Michael L Littman, Matthew E Taylor, Jeff Huang, and David L Roberts. 2016. Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning. _Autonomous Agents and Multi-Agent Systems_, 30:30–59. [451] Dylan P Losey, Andrea Bajcsy, Marcia K O’Malley, and Anca D Dragan. 2022. Physical interaction as communication: Learning robot objectives online from human corrections. _The International Journal of Robotics_ _Research_, 41(1):20–44. [452] Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. 2017. Multiagent actor-critic for mixed cooperative-competitive environments. _Advances in Neural Information Processing_ _Systems_, 30. [453] Ekdeep Singh Lubana, Eric J Bigelow, Robert P Dick, David Krueger, and Hidenori Tanaka. 2023. Mechanistic mode connectivity. In _International Conference on Machine Learning_, pages 22965–23004. PMLR. [454] Daniel D Lundstrom, Tianjian Huang, and Meisam Razaviyayn. 2022. A rigorous study of integrated gradients method and extensions to internal neuron attributions. In _International Conference on Machine Learning_, pages 14485–14508. PMLR. 86 References [455] Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, and Pierre Sermanet. 2020. Learning latent plans from play. In _Conference on robot learning_, pages 1113–1132. PMLR. [456] Corey Lynch, Ayzaan Wahid, Jonathan Tompson, Tianli Ding, James Betker, Robert Baruch, Travis Armstrong, and Pete Florence. 2023. Interactive language: Talking to robots in real time. _IEEE Robotics and_ _Automation Letters_ . [457] Weiqin Ma, Pu Duan, Sanmin Liu, Guofei Gu, and Jyh-Charn Liu. 2012. Shadow attacks: automatically evading system-call-behavior based malware detection. _Journal in Computer Virology_, 8:1–13. [458] Zixian Ma, Rose Wang, Fei-Fei Li, Michael Bernstein, and Ranjay Krishna. 2022. Elign: Expectation alignment as a multi-agent intrinsic reward. _Advances in Neural Information Processing Systems_, 35:8304–8317. [459] Matthijs M Maas. 2021. Aligning ai regulation to sociotechnical change. _Oxford Handbook on AI Gover-_ _nance (Oxford University Press, 2022 forthcoming)_ . [460] Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. _Journal of machine_ _learning research_, 9(11). [461] James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, Guan Wang, David L Roberts, Matthew E Taylor, and Michael L Littman. 2017. Interactive learning from policy-dependent human feedback. In _International_ _conference on machine learning_, pages 2285–2294. PMLR. [462] Alasdair MacIntyre. 2013. _After virtue_ . A&C Black. [463] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In _International Conference on Learning Represen-_ _tations_ . [464] Andreas Madsen, Siva Reddy, and Sarath Chandar. 2022. Post-hoc interpretability for neural nlp: A survey. _ACM Computing Surveys (CSUR)_, 55(8):1–42. [[465] MAI. 2023. Introducing democratic fine-tuning.](https://www.meaningalignment.org/research/introducing-democratic-fine-tuning) [466] Daniel J Mankowitz, Andrea Michi, Anton Zhernov, Marco Gelmi, Marco Selvi, Cosmin Paduraru, Edouard Leurent, Shariq Iqbal, Jean-Baptiste Lespiau, Alex Ahern, et al. 2023. Faster sorting algorithms discovered using deep reinforcement learning. _Nature_, 618(7964):257–263. [467] Aaron Mannes. 2020. Governance, risk, and artificial intelligence. _AI Magazine_, 41(1):61–69. [468] James Manyika, Michael Chui, Mehdi Miremadi, Jacques Bughin, Katy George, Paul Willmott, and Martin Dewhurst. 2017. A future that works: Ai, automation, employment, and productivity. _McKinsey Global Institute_ _Research, Tech. Rep_, 60:1–135. [469] Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Xiaodan Li, Rong Zhang, Hui Xue, et al. 2022. Enhance the visual representation via discrete adversarial training. _Advances in Neural Information_ _Processing Systems_, 35:7520–7533. [470] Peter Marbach and John N Tsitsiklis. 2001. Simulation-based optimization of markov reward processes. _IEEE Transactions on Automatic Control_, 46(2):191–209. [[471] Gary Marcus. 2018. Deep learning: A critical appraisal.](http://arxiv.org/abs/1801.00631) [472] David Mascharka, Philip Tran, Ryan Soklaski, and Arjun Majumdar. 2018. Transparency by design: Closing the gap between performance and interpretability in visual reasoning. In _Proceedings of the IEEE conference_ _on computer vision and pattern recognition_, pages 4942–4950. [473] Charles G McClintock and Eddy Van Avermaet. 1982. Social values and rules of fairness: A theoretical perspective. _Cooperation and helping behavior_, pages 43–71. [474] Thomas McGrath, Matthew Rahtz, Janos Kramar, Vladimir Mikulik, and Shane Legg. 2023. The hydra effect: Emergent self-repair in language model computations. _arXiv preprint arXiv:2307.15771_ . [475] Kevin R McKee, Ian Gemp, Brian McWilliams, Edgar A Duèñez-Guzmán, Edward Hughes, and Joel Z Leibo. 2020. Social diversity and social preferences in mixed-motive reinforcement learning. In _Proceedings of_ _the 19th International Conference on Autonomous Agents and MultiAgent Systems_, pages 869–877. [476] Lev E McKinney, Yawen Duan, David Krueger, and Adam Gleave. 2022. On the fragility of learned reward functions. In _NeurIPS ML Safety Workshop_ . 87 References [477] Scott McLean, Gemma JM Read, Jason Thompson, Chris Baber, Neville A Stanton, and Paul M Salmon. 2023. The risks associated with artificial general intelligence: A systematic review. _Journal of Experimental &_ _Theoretical Artificial Intelligence_, 35(5):649–663. [478] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. _ACM Computing Surveys (CSUR)_, 54(6):1–35. [479] Bahar Memarian and Tenzin Doleck. 2023. Fairness, accountability, transparency, and ethics (fate) in artificial intelligence (ai), and higher education: A systematic review. _Computers and Education: Artificial Intelli-_ _gence_, page 100152. [480] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual associations in gpt. _Advances in Neural Information Processing Systems_, 35:17359–17372. [481] Kevin Meng, Arnab Sen Sharma, Alex J Andonian, Yonatan Belinkov, and David Bau. 2022b. Mass-editing memory in a transformer. In _The Eleventh International Conference on Learning Representations_ . [[482] Alex Mennen. 2018. A comment on the ida-alphagozero metaphor; capabilities versus alignment. https:](https://www.alignmentforum.org/posts/yXFKh2jGysQNfX2NM/a-comment-on-the-ida-alphagozero-metaphor-capabilities) [//www.alignmentforum.org/posts/yXFKh2jGysQNfX2NM/a-comment-on-the-ida-alp](https://www.alignmentforum.org/posts/yXFKh2jGysQNfX2NM/a-comment-on-the-ida-alphagozero-metaphor-capabilities) [hagozero-metaphor-capabilities.](https://www.alignmentforum.org/posts/yXFKh2jGysQNfX2NM/a-comment-on-the-ida-alphagozero-metaphor-capabilities) [483] Bruno Mermet and Gaële Simon. 2016. Formal verication of ethical properties in multiagent systems. In _1st_ _Workshop on Ethics in the Design of Intelligent Agents_ . [484] David M Messick and Charles G McClintock. 1968. Motivational bases of choice in experimental games. _Journal of experimental social psychology_, 4(1):1–25. [[485] Meta. 2023. Meta and microsoft introduce the next generation of llama. https://ai.meta.com/blog](https://ai.meta.com/blog/llama-2) [/llama-2.](https://ai.meta.com/blog/llama-2) [486] Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, et al. 2022. What do nlp researchers believe? results of the nlp community metasurvey. _arXiv preprint arXiv:2208.12852_ . [[487] Michaelcohen. 2020. the-ai-debate-debate. https://www.alignmentforum.org/posts/L3QDs](https://www.alignmentforum.org/posts/L3QDs6of4Rb2TgpRD/the-ai-debate-debate) [6of4Rb2TgpRD/the-ai-debate-debate.](https://www.alignmentforum.org/posts/L3QDs6of4Rb2TgpRD/the-ai-debate-debate) [488] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. _Artificial Intelli-_ _gence_, 267:1–38. [489] Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2023. Recent advances in natural language processing via large pre-trained language models: A survey. _ACM Computing Surveys (CSUR)_, 56(2):1–40. [490] Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, and Yejin [Choi. 2024. Can LLMs keep a secret? testing privacy implications of language models via contextual integrity](https://openreview.net/forum?id=gmg7t8b4s0) [theory. In](https://openreview.net/forum?id=gmg7t8b4s0) _The Twelfth International Conference on Learning Representations_ . [491] Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Maura Pintor, Wenke Lee, Yuval Elovici, et al. 2023. The threat of offensive ai to organizations. _Computers & Security_, 124:103006. [492] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. _Nature_, 518(7540):529–533. [493] Thomas M Moerland, Joost Broekens, Aske Plaat, Catholijn M Jonker, et al. 2023. Model-based reinforcement learning: A survey. _Foundations and Trends® in Machine Learning_, 16(1):1–118. [494] Sina Mohseni, Niloofar Zarei, and Eric D Ragan. 2021. A multidisciplinary survey and framework for design and evaluation of explainable ai systems. _ACM Transactions on Interactive Intelligent Systems (TiiS)_, 11(3-4):1– 45. [495] Jakob Mökander, Jonas Schuett, Hannah Rose Kirk, and Luciano Floridi. 2023. Auditing large language models: a three-layered approach. _AI and Ethics_, pages 1–31. [496] Ari S Morcos, David GT Barrett, Neil C Rabinowitz, and Matthew Botvinick. 2018. On the importance of single directions for generalization. In _International Conference on Learning Representations_ . 88 References [497] Alexander Mordvintsev, Chris Olah, and Mike Tyka. 2015. Inceptionism: Going deeper into neural networks. _Google Research Blog_ . [[498] Emad Mostaque. 2022. Democratizing ai, stable diffusion & generative models. https://exchange.s](https://exchange.scale.com/public/videos/emad-mostaque-stability-ai-stable-diffusion-open-source) [cale.com/public/videos/emad-mostaque-stability-ai-stable-diffusion-open-s](https://exchange.scale.com/public/videos/emad-mostaque-stability-ai-stable-diffusion-open-source) [ource.](https://exchange.scale.com/public/videos/emad-mostaque-stability-ai-stable-diffusion-open-source) [499] Darius Muglich, Christian Schroeder de Witt, Elise van der Pol, Shimon Whiteson, and Jakob Foerster. 2022. Equivariant networks for zero-shot coordination. _Advances in Neural Information Processing Systems_, 35:6410–6423. [[500] Gabriel Mukobi. 2022. Iterated distillation-amplification, gato, and proto-agi. https://www.lesswron](https://www.lesswrong.com/posts/Evyk8eb6b7tFd6pxJ/iterated-distillation-amplification-gato-and-proto-agi-re) [g.com/posts/Evyk8eb6b7tFd6pxJ/iterated-distillation-amplification-gato-a](https://www.lesswrong.com/posts/Evyk8eb6b7tFd6pxJ/iterated-distillation-amplification-gato-and-proto-agi-re) [nd-proto-agi-re.](https://www.lesswrong.com/posts/Evyk8eb6b7tFd6pxJ/iterated-distillation-amplification-gato-and-proto-agi-re) [501] Arslan Munir, Alexander Aved, and Erik Blasch. 2022. Situational awareness: techniques, challenges, and prospects. _AI_, 3(1):55–77. [502] Kevin P. Murphy. 2023. _Probabilistic Machine Learning: Advanced Topics_ . MIT Press. [503] Ryan O Murphy and Kurt A Ackermann. 2014. Social value orientation: Theoretical and measurement issues in the study of social preferences. _Personality and Social Psychology Review_, 18(1):13–41. [504] Ryan O Murphy, Kurt A Ackermann, and Michel JJ Handgraaf. 2011. Measuring social value orientation. _Judgment and Decision making_, 6(8):771–781. [505] Grazia Murtarelli, Anne Gregory, and Stefania Romenti. 2021. A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. _Journal of Business Research_, 129:927–935. [506] Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics_ _and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 5356–5371. [507] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. _arXiv preprint arXiv:2112.09332_ . [508] Neel Nanda. 2023a. Attribution patching: Activation patching at industrial scale. [[509] Neel Nanda. 2023b. Othello-gpt: Future work i am excited about. https://www.alignmentforum.o](https://www.alignmentforum.org/posts/qgK7smTvJ4DB8rZ6h/othello-gpt-future-work-i-am-excited-about) [rg/posts/qgK7smTvJ4DB8rZ6h/othello-gpt-future-work-i-am-excited-about.](https://www.alignmentforum.org/posts/qgK7smTvJ4DB8rZ6h/othello-gpt-future-work-i-am-excited-about) [510] Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. 2022. Progress measures for grokking via mechanistic interpretability. In _The Eleventh International Conference on Learning Representa-_ _tions_ . [511] Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In _Proceedings of the 2020 Conference on Empirical_ _Methods in Natural Language Processing (EMNLP)_, pages 1953–1967. [512] Ashutosh Nayyar, Aditya Mahajan, and Demosthenis Teneketzis. 2013. Decentralized stochastic control with partial history sharing: A common information approach. _IEEE Transactions on Automatic Control_, 58(7):1644–1658. [513] Michael Neely, Stefan F Schouten, Maurits JR Bleeker, and Ana Lucic. 2021. Order in the court: Explainable ai methods prone to disagreement. _arXiv preprint arXiv:2105.03287_ . [514] Andrew Y Ng, Daishi Harada, and Stuart Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In _Icml_, volume 99, pages 278–287. Citeseer. [515] Andrew Y Ng, Stuart Russell, et al. 2000. Algorithms for inverse reinforcement learning. In _Icml_, volume 1, page 2. [[516] Richard Ngo. 2020a. Agi safety from first principles. https://www.alignmentforum.org/s/mzg](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) [tmmTKKn5MuCzFJ.](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) [[517] Richard Ngo. 2020b. continuing-the-takeoffs-debate. https://www.alignmentforum.org/posts](https://www.alignmentforum.org/posts/Tpn2Fx9daLvj28kes/continuing-the-takeoffs-debate) [/Tpn2Fx9daLvj28kes/continuing-the-takeoffs-debate.](https://www.alignmentforum.org/posts/Tpn2Fx9daLvj28kes/continuing-the-takeoffs-debate) 89 References [[518] Richard Ngo. 2021. /why i m excited about debate. https://www.alignmentforum.org/posts](https://www.alignmentforum.org/posts/LDsSqXf9Dpu3J3gHD/why-i-m-excited-about-debate) [/LDsSqXf9Dpu3J3gHD/why-i-m-excited-about-debate.](https://www.alignmentforum.org/posts/LDsSqXf9Dpu3J3gHD/why-i-m-excited-about-debate) [519] Richard Ngo, Lawrence Chan, and Sören Mindermann. 2024. The alignment problem from a deep learning perspective: A position paper. In _The Twelfth International Conference on Learning Representations_ . [520] Anh Nguyen, Jason Yosinski, and Jeff Clune. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. _Proceedings of the IEEE Conference on Computer Vision and Pattern_ _Recognition_ . [521] Anh Nguyen, Jason Yosinski, and Jeff Clune. 2016. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. _arXiv preprint arXiv:1602.03616_ . [522] Chi Nguyen. 2020. My understanding of paul christiano’s iterated amplification al safety research agenda. [https://www.alignmentforum.org/posts/PT8vSxsusqWuN7JXp/my-understanding-o](https://www.alignmentforum.org/posts/PT8vSxsusqWuN7JXp/my-understanding-of-paul-christiano-s-iterated-amplification#A_mathematical_way_of_solving_Go_is_impossible) [f-paul-christiano-s-iterated-amplification#A_mathematical_way_of_solving](https://www.alignmentforum.org/posts/PT8vSxsusqWuN7JXp/my-understanding-of-paul-christiano-s-iterated-amplification#A_mathematical_way_of_solving_Go_is_impossible) [_Go_is_impossible.](https://www.alignmentforum.org/posts/PT8vSxsusqWuN7JXp/my-understanding-of-paul-christiano-s-iterated-amplification#A_mathematical_way_of_solving_Go_is_impossible) [523] Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Ben Eysenbach. 2021. f-irl: Inverse reinforcement learning via state marginal matching. In _Conference on Robot Learning_, pages 529–551. PMLR. [524] Nassim Nicholas. 2008. The black swan: the impact of the highly improbable. _Journal of the Management_ _Training Institut_, 36(3):56. [525] Safiya Umoja Noble. 2018. Algorithms of oppression. In _Algorithms of oppression_ . New York university press. [526] Safiya Umoja Noble, Beatrice Dias, Sara Cole Stratton, Aimee van Wynsberghe, Carlos Affonso Souza, Ilene Carpenter, Alvaro Martin Enriquez, and Emily Ratté. 2021. Ai regulation through an intergenerational [lens. https://www3.weforum.org/docs/WEF_AI_Regulation_through_an_Intergenera](https://www3.weforum.org/docs/WEF_AI_Regulation_through_an_Intergenerational_Lens_2021.pdf) [tional_Lens_2021.pdf.](https://www3.weforum.org/docs/WEF_AI_Regulation_through_an_Intergenerational_Lens_2021.pdf) [527] Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia. 2018. A voting-based system for ethical decision making. _Proceedings of the AAAI_ _Conference on Artificial Intelligence_, 32(1). [528] Eirini Ntoutsi, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, Franco Turini, Symeon Papadopoulos, Emmanouil Krasanakis, et al. 2020. Bias in datadriven artificial intelligence systems—an introductory survey. _Wiley Interdisciplinary Reviews: Data Mining_ _and Knowledge Discovery_, 10(3):e1356. [[529] OECD. 2019. Oecd principles on artificial intelligence. https://oecd.ai/en/ai-principles.](https://oecd.ai/en/ai-principles) [[530] OecdAI. 2021. Ai principles. https://oecd.ai/en/dashboards/ai-principles/P8.](https://oecd.ai/en/dashboards/ai-principles/P8) [531] Caspar Oesterheld. 2021. Approval-directed agency and the decision theory of newcomb-like problems. _Synthese_, 198(Suppl 27):6491–6504. [532] Caspar Oesterheld and Vincent Conitzer. 2022. Safe pareto improvements for delegated game playing. _Au-_ _tonomous Agents and Multi-Agent Systems_, 36(2):46. [[533] Chris Olah. 2014. Visualizing mnist: An exploration of dimensionality reduction. https://colah.gi](https://colah.github.io/posts/2014-10-Visualizing-MNIST) [thub.io/posts/2014-10-Visualizing-MNIST.](https://colah.github.io/posts/2014-10-Visualizing-MNIST) [[534] Chris Olah. 2015. Visualizing representations: Deep learning and human beings. https://colah.gith](https://colah.github.io/posts/2015-01-Visualizing-Representations) [ub.io/posts/2015-01-Visualizing-Representations.](https://colah.github.io/posts/2015-01-Visualizing-Representations) [[535] Chris Olah. 2023. Interpretability dreams. https://transformer-circuits.pub/2023/inter](https://transformer-circuits.pub/2023/interpretability-dreams/index.html#larger-scale) [pretability-dreams/index.html#larger-scale.](https://transformer-circuits.pub/2023/interpretability-dreams/index.html#larger-scale) [536] Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. 2020. Zoom in: An introduction to circuits. _Distill_, 5(3):e00024–001. [537] Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. 2018. The building blocks of interpretability. _Distill_, 3(3):e10. [[538] Chris Olah et al. 2017. Feature visualization.](https://doi.org/10.23915/distill.00007) _Distill_ . [539] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. 2022. In-context learning and induction heads. _arXiv preprint_ _arXiv:2209.11895_ . 90 References [540] Stephen M Omohundro. 2008. The basic ai drives. In _AGI_, volume 171, pages 483–492. [[541] OpenAI. 2021a. Curve detectors. https://distill.pub/2020/circuits/curve-detectors.](https://distill.pub/2020/circuits/curve-detectors) [[542] OpenAI. 2021b. Weight banding. https://distill.pub/2020/circuits/weight-banding.](https://distill.pub/2020/circuits/weight-banding) [[543] OpenAI. 2023a. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774) [[544] OpenAI. 2023b. Gpt-4v(ision) system card. https://cdn.openai.com/papers/GPTV_System_](https://cdn.openai.com/papers/GPTV_System_Card.pdf) [Card.pdf.](https://cdn.openai.com/papers/GPTV_System_Card.pdf) [[545] OpenAI. 2023c. Introducing superalignment. https://openai.com/blog/introducing-super](https://openai.com/blog/introducing-superalignment) [alignment. Accessed on July 5, 2023.](https://openai.com/blog/introducing-superalignment) [[546] Robert Opp. 2023. Committing to bridging the digital divide in least developed countries. https://www.](https://www.undp.org/blog/committing-bridging-digital-divide-least-developed-countries) [undp.org/blog/committing-bridging-digital-divide-least-developed-countri](https://www.undp.org/blog/committing-bridging-digital-divide-least-developed-countries) [es.](https://www.undp.org/blog/committing-bridging-digital-divide-least-developed-countries) [547] Afshin Oroojlooy and Davood Hajinezhad. 2023. A review of cooperative multi-agent deep reinforcement learning. _Applied Intelligence_, 53(11):13677–13722. [548] Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J Andrew Bagnell, Pieter Abbeel, Jan Peters, et al. 2018. An algorithmic perspective on imitation learning. _Foundations and Trends® in Robotics_, 7(1-2):1–179. [549] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_, 35:27730–27744. [550] Lorenzo Pacchiardi, Alex James Chan, Sören Mindermann, Ilan Moscovitz, Alexa Yue Pan, Yarin Gal, Owain Evans, and Jan M. Brauner. 2024. How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions. In _The Twelfth International Conference on Learning Representations_ . [551] Malayandi Palan, Gleb Shevchuk, Nicholas Charles Landolfi, and Dorsa Sadigh. 2019. Learning reward functions by integrating human demonstrations and preferences. In _Robotics: Science and Systems_ . [552] Alexander Pan, Kush Bhatia, and Jacob Steinhardt. 2021. The effects of reward misspecification: Mapping and mitigating misaligned models. In _International Conference on Learning Representations_ . [553] Alexander Pan, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. 2023a. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark. _ICML_ . [554] Alexander Pan, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. 2023b. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark. In _International Conference on Machine Learning_, pages 26837–26867. PMLR. [555] Rahul Pandey, Hemant Purohit, Carlos Castillo, and Valerie L Shalin. 2022. Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning. _International Journal of Human-Computer Studies_, 160:102772. [556] Paulina Karolina Pankowska. 2020. Framework on ethical aspects of artificial intelligence, robotics and related technologies. _European Parliament_ . [557] Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. 2016. Towards the science of security and privacy in machine learning. _arXiv preprint arXiv:1611.03814_ . [558] Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, and Abhinav Gupta. 2022. The unsurprising effectiveness of pre-trained vision models for control. In _International Conference on Machine Learning_, pages 17359–17371. PMLR. [559] Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023a. Generative agents: Interactive simulacra of human behavior. In _Proceedings of the 36th_ _Annual ACM Symposium on User Interface Software and Technology_, pages 1–22. [560] Peter S Park, Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks. 2023b. Ai deception: A survey of examples, risks, and potential solutions. _arXiv preprint arXiv:2308.14752_ . [561] Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. Bbq: A hand-built bias benchmark for question answering. In _Findings of_ _the Association for Computational Linguistics: ACL 2022_, pages 2086–2105. 91 References [562] Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. 2021. Data and its (dis) contents: A survey of dataset development and use in machine learning research. _Patterns_, 2(11). [563] Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In _International Conference on Learning Representations_ . [564] Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri. 2022. Asleep at the keyboard? assessing the security of github copilot’s code contributions. In _2022 IEEE Symposium on_ _Security and Privacy (SP)_, pages 754–768. IEEE. [[565] Will Pearce and Joseph Lucas. 2023. Nvidia ai red team: An introduction. https://developer.nvid](https://developer.nvidia.com/blog/nvidia-ai-red-team-an-introduction) [ia.com/blog/nvidia-ai-red-team-an-introduction.](https://developer.nvidia.com/blog/nvidia-ai-red-team-an-introduction) [566] Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, and Ece Kamar. 2022. Investigations of performance and bias in human-ai teamwork in hiring. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36-11, pages 12089–12097. [567] Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, and Moritz Hardt. 2020. Performative prediction. In _International Conference on Machine Learning_, pages 7599–7609. PMLR. [568] Luís Moniz Pereira, Ari Saptawijaya, Luís Moniz Pereira, and Ari Saptawijaya. 2016a. Bridging two realms of machine ethics. _Programming machine ethics_, pages 159–165. [569] Luís Moniz Pereira, Ari Saptawijaya, et al. 2016b. _Programming machine ethics_, volume 26. Springer. [570] Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. 2022. Red teaming language models with language models. In _Proceedings of_ _the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 3419–3448. [571] Ethan Perez, Sam Ringer, Kamil˙e Lukoši¯ut˙e, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. 2023. Discovering language model behaviors with modelwritten evaluations. In _Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada,_ _July 9-14, 2023_, pages 13387–13434. Association for Computational Linguistics. [572] Julien Perolat, Joel Z Leibo, Vinicius Zambaldi, Charles Beattie, Karl Tuyls, and Thore Graepel. 2017. A multi-agent reinforcement learning model of common-pool resource appropriation. _Advances in Neural Infor-_ _mation Processing Systems_, 30. [573] Lucas Perry. 2020. Evan hubinger on inner alignment, outer alignment, and proposals for building safe [advanced ai. https://www.alignmentforum.org/posts/qZGoHkRgANQpGHWnu/evan-hubin](https://www.alignmentforum.org/posts/qZGoHkRgANQpGHWnu/evan-hubinger-on-inner-alignment-outer-alignment-and) [ger-on-inner-alignment-outer-alignment-and.](https://www.alignmentforum.org/posts/qZGoHkRgANQpGHWnu/evan-hubinger-on-inner-alignment-outer-alignment-and) [574] J Peters, Peter Buhlmann, and N Meinshausen. 2015. Causal inference using invariant prediction: identification and confidence intervals. arxiv. _Methodology_ . [575] Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. 2017. _Elements of causal inference: foundations_ _and learning algorithms_ . The MIT Press. [[576] Steve Phelps and Yvan I. Russell. 2023. Investigating emergent goal-like behaviour in large language models](http://arxiv.org/abs/2305.07970) [using experimental economics.](http://arxiv.org/abs/2305.07970) [577] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. 2017. Robust adversarial reinforcement learning. In _International Conference on Machine Learning_, pages 2817–2826. PMLR. [578] James Pita, Manish Jain, Milind Tambe, Fernando Ordónez, and Sarit Kraus. 2010. Robust solutions to stackelberg games: Addressing bounded rationality and limited observations in human cognition. _Artificial_ _Intelligence_, 174(15):1142–1171. [579] Fabrizio Pittorino, Antonio Ferraro, Gabriele Perugini, Christoph Feinauer, Carlo Baldassi, and Riccardo Zecchina. 2022. Deep networks on toroids: removing symmetries reveals the structure of flat regions in the landscape geometry. In _International Conference on Machine Learning_, pages 17759–17781. PMLR. [580] Robin L Plackett. 1975. The analysis of permutations. _Journal of the Royal Statistical Society Series C:_ _Applied Statistics_, 24(2):193–202. [581] Dean A Pomerleau. 1991. Efficient training of artificial neural networks for autonomous navigation. _Neural_ _computation_, 3(1):88–97. [582] Karl Popper. 2005. _The logic of scientific discovery_ . Routledge. 92 References [583] Omid Poursaeed, Tianxing Jiang, Harry Yang, Serge Belongie, and Ser-Nam Lim. 2021. Robustness and generalization via generative adversarial training. In _Proceedings of the IEEE/CVF International Conference on_ _Computer Vision_, pages 15711–15720. [584] Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. 2022. Grokking: Generalization beyond overfitting on small algorithmic datasets. _arXiv preprint arXiv:2201.02177_ . [585] Lutz Prechelt. 2002. Early stopping-but when? In _Neural Networks: Tricks of the trade_, pages 55–69. Springer. [586] Dale Purves, George J Augustine, David Fitzpatrick, Lawrence C Katz, Anthony-Samuel LaMantia, James O McNamara, and S Mark. Williams. 2001. _Neuroscience, 2nd edition_ . Sinauer Associates. [587] Martin L Puterman. 2014. _Markov decision processes: discrete stochastic dynamic programming_ . John Wiley & Sons. [588] Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. 2024. [Fine-tuning aligned language models compromises safety, even when users do not intend to! In](https://openreview.net/forum?id=hTEGyKf0dZ) _The Twelfth_ _International Conference on Learning Representations_ . [589] Tianyi Qiu, Fanzhi Zeng, Jiaming Ji, Dong Yan, Kaile Wang, Jiayi Zhou, Han Yang, Josef Dai, Xuehai Pan, and Yaodong Yang. 2024. Rethinking information structures in rlhf: Reward generalization from a graph theory perspective. _arXiv preprint arXiv:2402.10184_ . [590] R Quian Quiroga, Leila Reddy, Gabriel Kreiman, Christof Koch, and Itzhak Fried. 2005. Invariant visual representation by single neurons in the human brain. _Nature_, 435(7045):1102–1107. [591] Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamil˙e Lukoši¯ut˙e, et al. 2023. Question decomposition improves the faithfulness of model-generated reasoning. _arXiv preprint arXiv:2307.11768_ . [592] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. _Advances in Neural_ _Information Processing Systems_, 36. [593] Hamed Rahimian and Sanjay Mehrotra. 2019. Distributionally robust optimization: A review. _arXiv preprint_ _arXiv:1908.05659_ . [594] Dheeraj Rajagopal, Vidhisha Balachandran, Eduard H Hovy, and Yulia Tsvetkov. 2021. Selfexplain: A selfexplaining architecture for neural text classifiers. In _Proceedings of the 2021 Conference on Empirical Methods_ _in Natural Language Processing_, pages 836–850. [595] Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020. Saving face: Investigating the ethical concerns of facial recognition auditing. In _Proceedings of the_ _AAAI/ACM Conference on AI, Ethics, and Society_, pages 145–151. [[596] Ram Shankar Siva Kumar. 2023. Microsoft ai red team building future of safer ai. https://www.micr](https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai) [osoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-f](https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai) [uture-of-safer-ai.](https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai) [597] Deepak Ramachandran and Eyal Amir. 2007. Bayesian inverse reinforcement learning. In _IJCAI_, volume 7, pages 2586–2591. [598] Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. 2023. Toward transparent ai: A survey on interpreting the inner structures of deep neural networks. In _2023 IEEE Conference on Secure and_ _Trustworthy Machine Learning (SaTML)_, pages 464–483. IEEE. [599] Shauli Ravfogel, Michael Twiton, Yoav Goldberg, and Ryan D Cotterell. 2022. Linear adversarial concept erasure. In _International Conference on Machine Learning_, pages 18400–18421. PMLR. [600] Harish Ravichandar, Athanasios S Polydoros, Sonia Chernova, and Aude Billard. 2020. Recent advances in robot learning from demonstration. _Annual review of control, robotics, and autonomous systems_, 3:297–330. [[601] Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does](http://arxiv.org/abs/2005.00719) [probing accuracy entail task relevance?](http://arxiv.org/abs/2005.00719) [602] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do imagenet classifiers generalize to imagenet? In _International conference on machine learning_, pages 5389–5400. PMLR. 93 References [603] Siddharth Reddy, Anca Dragan, Sergey Levine, Shane Legg, and Jan Leike. 2020. Learning human objectives by evaluating hypothetical behavior. In _International Conference on Machine Learning_, pages 8020–8029. PMLR. [604] Siddharth Reddy, Anca D Dragan, and Sergey Levine. 2019. Sqil: Imitation learning via reinforcement learning with sparse rewards. In _International Conference on Learning Representations_ . [605] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, Gabriel Barthmaron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. 2022. A generalist agent. _Transactions on Machine Learning Research_ . [606] Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of bert. _Advances in Neural Information Processing Systems_, 32. [607] Richard Ren, Steven Basart, Adam Khoja, Alice Gatti, Long Phan, Xuwang Yin, Mantas Mazeika, Alexander Pan, Gabriel Mukobi, Ryan H Kim, et al. 2024. Safetywashing: Do ai safety benchmarks actually measure safety progress? _arXiv preprint arXiv:2407.21792_ . [608] Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, and Xiang Ren. 2020. Generating natural language adversarial examples on a large scale with generative models. In _ECAI 2020_, pages 2156–2163. IOS Press. [[609] Richard Ngo. 2022. Gradient hacking. https://www.alignmentforum.org/posts/EeAgytDZb](https://www.alignmentforum.org/posts/EeAgytDZbDjRznPMA/gradient-hacking-definitions-and-examples) [DjRznPMA/gradient-hacking-definitions-and-examples.](https://www.alignmentforum.org/posts/EeAgytDZbDjRznPMA/gradient-hacking-definitions-and-examples) [610] Mattia Rigotti, Christoph Miksovic, Ioana Giurgiu, Thomas Gschwind, and Paolo Scotton. 2022. Attentionbased interpretability with concept transformers. In _International Conference on Learning Representations_ . [611] Mark Ring and Laurent Orseau. 2011. Delusion, survival, and intelligent agents. In _Artificial General_ _Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3-6, 2011. Proceedings_ _4_, pages 11–20. Springer. [612] Yuji Roh, Geon Heo, and Steven Euijong Whang. 2019. A survey on data collection for machine learning: a big data-ai integration perspective. _IEEE Transactions on Knowledge and Data Engineering_, 33(4):1328–1347. [613] Milton Rokeach. 1973. _The nature of human values._ Free press. [614] Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, and Preslav Nakov. 2021. Solid: A large-scale semi-supervised dataset for offensive language identification. In _Findings of the Association for_ _Computational Linguistics: ACL-IJCNLP 2021_, pages 915–928. [615] Andrew Ross, Isaac Lage, and Finale Doshi-Velez. 2017. The neural lasso: Local linear sparsity for interpretable explanations. In _Workshop on Transparent and Interpretable Machine Learning in Safety Critical_ _Environments, 31st Conference on Neural Information Processing Systems_, volume 4. [616] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In _Proceedings of the fourteenth international conference on artificial_ _intelligence and statistics_, pages 627–635. JMLR Workshop and Conference Proceedings. [617] Francesca Rossi, Kristen Brent Venable, and Toby Walsh. 2011. _A Short Introduction to Preferences: Be-_ _tween AI and Social Choice_ . Morgan & Claypool Publishers. [618] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. 2004. " grabcut" interactive foreground extraction using iterated graph cuts. _ACM transactions on graphics (TOG)_, 23(3):309–314. [619] Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. _Nature Machine Intelligence_, 1(5):206–215. [620] Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for_ _Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_, pages 8–14. [[621] Tim Rudner and Helen Toner. 2021a. Key concepts in ai safety: Interpretability in machine learning. http](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning) [s://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretab](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning) [ility-in-machine-learning.](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning) [622] Tim Rudner and Helen Toner. 2021b. Key concepts in ai safety: Robustness and adversarial examples. [https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustn](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-examples) [ess-and-adversarial-examples.](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-examples) 94 References [623] Stuart Russell. 2019. _Human compatible: Artificial intelligence and the problem of control_ . Penguin. [624] Stuart Russell, Daniel Dewey, and Max Tegmark. 2015. Research priorities for robust and beneficial artificial intelligence. _AI Magazine_, 36(4):105–114. [625] Noveen Sachdeva and Julian McAuley. 2023. Data distillation: A survey. _Transactions on Machine Learning_ _Research_ . Survey Certification. [626] Joel L Sachs, Ulrich G Mueller, Thomas P Wilcox, and James J Bull. 2004. The evolution of cooperation. _The Quarterly review of biology_, 79(2):135–160. [627] Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. 2017. _Active preference-based learning_ _of reward functions_ . Escholarship. [628] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. 2020. Distributionally robust neural networks. In _International Conference on Learning Representations_ . [629] Ananya B Sai, Akash Kumar Mohankumar, and Mitesh M Khapra. 2022. A survey of evaluation metrics used for nlg systems. _ACM Computing Surveys (CSUR)_, 55(2):1–39. [630] Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T Rodolfa, and Rayid Ghani. 2018. Aequitas: A bias and fairness audit toolkit. _arXiv preprint arXiv:1811.05577_ . [631] Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller. 2019. _Explainable AI: interpreting, explaining and visualizing deep learning_, volume 11700. Springer Nature. [632] Jonas B Sandbrink. 2023. Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools. _arXiv preprint arXiv:2306.13952_ . [633] Lindsay Sanneman and Julie A Shah. 2020. A situation awareness-based framework for design and evaluation of explainable ai. In _Explainable, Transparent Autonomous Agents and Multi-Agent Systems: Second_ _International Workshop, EXTRAAMAS 2020, Auckland, New Zealand, May 9–13, 2020, Revised Selected Pa-_ _pers 2_, pages 94–110. Springer. [634] Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. 2023. Whose opinions do language models reflect? In _International Conference on Machine Learning, ICML 2023,_ _23-29 July 2023, Honolulu, Hawaii, USA_, volume 202 of _Proceedings of Machine Learning Research_, pages 29971–30004. PMLR. [[635] Beatrice Dias Sara Stratton. 2021. Why we must consider the intergenerational impacts of ai. https:](https://www.weforum.org/agenda/2021/10/why-we-must-consider-the-intergenerational-impact-of-ai) [//www.weforum.org/agenda/2021/10/why-we-must-consider-the-intergeneration](https://www.weforum.org/agenda/2021/10/why-we-must-consider-the-intergenerational-impact-of-ai) [al-impact-of-ai.](https://www.weforum.org/agenda/2021/10/why-we-must-consider-the-intergenerational-impact-of-ai) [636] Fumihiro Sasaki and Ryota Yamashina. 2020. Behavioral cloning from noisy demonstrations. In _Interna-_ _tional Conference on Learning Representations_ . [637] William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. 2022. Self-critiquing models for assisting human evaluators. _arXiv preprint arXiv:2206.05802_ . [638] Nripsuta Ani Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, David C Parkes, and Yang Liu. 2019. How do fairness definitions fare? examining public attitudes towards algorithmic definitions of fairness. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 99–106. [639] Stefan Schaal. 1996. Learning from demonstration. _Advances in Neural Information Processing Systems_, 9. [640] Stefan Schaal. 1999. Is imitation learning the route to humanoid robots? _Trends in cognitive sciences_, 3(6):233–242. [641] Nino Scherrer, Claudia Shi, Amir Feder, and David Blei. 2024. Evaluating the moral beliefs encoded in llms. _Advances in Neural Information Processing Systems_, 36. [642] Johannes Schneider and Michalis Vlachos. 2021. Explaining neural networks by decoding layer activations. In _Advances in Intelligent Data Analysis XIX: 19th International Symposium on Intelligent Data Analysis, IDA_ _2021, Porto, Portugal, April 26–28, 2021, Proceedings 19_, pages 63–75. Springer. [643] Patrick Schramowski, Cigdem Turan, Sophie Jentzsch, Constantin Rothkopf, and Kristian Kersting. 2020. The moral choice machine. _Frontiers in artificial intelligence_, 3:516840. [644] Jonas Schuett, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, and Ben Garfinkel. 2023. Towards best practices in agi safety and governance: A survey of expert opinion. _arXiv_ _preprint arXiv:2305.07153_ . 95 References [645] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_ . [646] Peter Schuster and Karl Sigmund. 1983. Replicator dynamics. _Journal of theoretical biology_, 100(3):533– 538. [647] Shalom H Schwartz. 1992. Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In _Advances in experimental social psychology_, volume 25, pages 1–65. Elsevier. [648] Shalom H Schwartz. 1994. Are there universal aspects in the structure and contents of human values? _Jour-_ _nal of social issues_, 50(4):19–45. [649] Elizabeth Seger, Noemi Dreksler, Richard Moulange, Emily Dardaman, Jonas Schuett, K Wei, Christoph Winter, Mackenzie Arnold, Seán Ó hÉigeartaigh, Anton Korinek, et al. 2023. Open-sourcing highly capable foundation models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives. _arXiv preprint arXiv:2311.09227_ . [[650] Charbel-Raphael Segerie. 2023. Against almost every theory of impact of interpretability. https://www.](https://www.alignmentforum.org/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1) [alignmentforum.org/posts/LNA8mubrByG7SFacm/against-almost-every-theory-o](https://www.alignmentforum.org/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1) [f-impact-of-interpretability-1.](https://www.alignmentforum.org/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1) [651] Amartya Sen. 1986. Social choice theory. _Handbook of mathematical economics_, 3:1073–1181. [652] Egemen Sert, Yaneer Bar-Yam, and Alfredo J Morales. 2020. Segregation dynamics with reinforcement learning and agent based modeling. _Scientific Reports_, 10(1):11771. [653] Rohin Shah, Pedro Freire, Neel Alex, Rachel Freedman, Dmitrii Krasheninnikov, Lawrence Chan, Michael D Dennis, Pieter Abbeel, Anca Dragan, and Stuart Russell. 2020. Benefits of assistance over reward learning. _https://openreview.net/forum?id=DFIoGDZejIB_ . [[654] Rohin Shah and Vikrant Varma. 2022. More examples of gmg. https://www.alignmentforum.o](https://www.alignmentforum.org/posts/Cfe2LMmQC4hHTDZ8r/more-examples-of-goal-misgeneralization) [rg/posts/Cfe2LMmQC4hHTDZ8r/more-examples-of-goal-misgeneralization.](https://www.alignmentforum.org/posts/Cfe2LMmQC4hHTDZ8r/more-examples-of-goal-misgeneralization) [655] Rusheb Shah, Quentin Feuillade Montixi, Soroush Pour, Arush Tagade, and Javier Rando. 2023. Scalable and transferable black-box jailbreaks for language models via persona modulation. In _Socially Responsible_ _Language Modelling Research_ . [656] Ali Shahin Shamsabadi, Ricardo Sanchez-Matilla, and Andrea Cavallaro. 2020. Colorfool: Semantic adversarial colorization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1151–1160. [657] Jacob N Shapiro and David A Siegel. 2010. Is this paper dangerous? balancing secrecy and openness in counterterrorism. _Security Studies_, 19(1):66–98. [658] Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, Samuel R. Bowman, Esin DURMUS, Zac Hatfield-Dodds, Scott R Johnston, Shauna M Kravec, Timothy Maxwell, Sam McCandlish, Kamal Ndousse, Oliver Rausch, Nicholas Schiefer, Da Yan, Miranda Zhang, and Ethan Perez. 2024. Towards understanding sycophancy in language models. In _The Twelfth International Conference on Learning Represen-_ _tations_ . [659] Pratyusha Sharma, Balakumar Sundaralingam, Valts Blukis, Chris Paxton, Tucker Hermans, Antonio Torralba, Jacob Andreas, and Dieter Fox. 2022. Correcting robot plans with natural language feedback. _arXiv_ _preprint arXiv:2204.05186_ . [660] Kenneth Shaw, Shikhar Bahl, and Deepak Pathak. 2023. Videodex: Learning dexterity from internet videos. In _Conference on Robot Learning_, pages 654–665. PMLR. [661] Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. 2023. "do anything now": Characterizing and evaluating in-the-wild jailbreak prompts on large language models. _arXiv preprint_ _arXiv:2308.03825_ . [662] Amit Sheth, Valerie L Shalin, and Ugur Kursuncu. 2022. Defining and detecting toxicity on social media: context and knowledge are key. _Neurocomputing_, 490:312–318. [663] Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus Anderljung, Noam Kolt, et al. 2023. Model evaluation for extreme risks. _arXiv preprint arXiv:2305.15324_ . 96 References [[664] Adam Shimi. 2022. How to diversify conceptual alignment: the model behind refine. https://www.al](https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind) [ignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-ali](https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind) [gnment-the-model-behind.](https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind) [[665] Daniel Shin, Anca D. Dragan, and Daniel S. Brown. 2023. Benchmarks and algorithms for offline preference-](https://openreview.net/forum?id=TGuXXlbKsn) [based reward learning.](https://openreview.net/forum?id=TGuXXlbKsn) _Trans. Mach. Learn. Res._, 2023. [666] Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. In _Proceedings of the 2020_ _Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 4222–4235. [[667] Buck Shlegeris and Ryan Greenblatt. 2023. Meta-level adversarial evaluation of oversight techniques might](https://www. alignmentforum. org/posts/MbWWKbyD5gLhJgfwn/meta-level-adversarial-evaluationof-oversight-techniques-1) [allow robust measurement of their adequacy. In](https://www. alignmentforum. org/posts/MbWWKbyD5gLhJgfwn/meta-level-adversarial-evaluationof-oversight-techniques-1) _Alignment Forum_ . [668] Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, and Yang Zhang. 2022. Why so toxic? measuring and triggering toxic behavior in open-domain chatbots. In _Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security_, pages 2659– 2673. [669] Harshit Sikchi, Qinqing Zheng, Amy Zhang, and Scott Niekum. 2023. Dual rl: Unification and new methods for reinforcement and imitation learning. In _Sixteenth European Workshop on Reinforcement Learning_ . [670] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. _Nature_, 529(7587):484–489. [671] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of go without human knowledge. _Nature_, 550(7676):354–359. [672] David Silver, Satinder Singh, Doina Precup, and Richard S Sutton. 2021. Reward is enough. _Artificial_ _Intelligence_, 299:103535. [673] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. _arXiv preprint arXiv:1312.6034_ . [674] Amanpreet Singh, Tushar Jain, and Sainbayar Sukhbaatar. 2018. Learning when to communicate at scale in multiagent cooperative and competitive tasks. In _International Conference on Learning Representations_ . [675] Munindar P Singh. 2014. Norms as a basis for governing sociotechnical systems. _ACM Transactions on_ _Intelligent Systems and Technology (TIST)_, 5(1):1–23. [676] Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. 2022. Defining and characterizing reward gaming. _Advances in Neural Information Processing Systems_, 35:9460–9471. [677] Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, and Adam Gleave. 2023. Invariance in policy optimisation and partial identifiability in reward learning. In _International Confer-_ _ence on Machine Learning_, pages 32033–32058. PMLR. [678] Nate Soares. 2018. The value learning problem. In _Artificial intelligence safety and security_, pages 89–97. Chapman and Hall/CRC. [679] Nate Soares and Benja Fallenstein. 2014. Aligning superintelligence with human interests: A technical research agenda. _Machine Intelligence Research Institute (MIRI) technical report_, 8. [680] Nate Soares, Benja Fallenstein, Stuart Armstrong, and Eliezer Yudkowsky. 2015. Corrigibility. In _Workshops_ _at the Twenty-Ninth AAAI Conference on Artificial Intelligence_ . [681] Nate Soares and Benya Fallenstein. 2017. Agent foundations for aligning machine intelligence with human interests: a technical research agenda. _The technological singularity: Managing the journey_, pages 103–125. [[682] Emily H. Soice, Rafael Rocha, Kimberlee Cordova, Michael Specter, and Kevin M. Esvelt. 2023. Can large](http://arxiv.org/abs/2306.03809) [language models democratize access to dual-use biotechnology?](http://arxiv.org/abs/2306.03809) [683] Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et al. 2019. Release strategies and the social impacts of language models. _arXiv preprint arXiv:1908.09203_ . [684] Jiaming Song, Hongyu Ren, Dorsa Sadigh, and Stefano Ermon. 2018a. Multi-agent generative adversarial imitation learning. _Advances in Neural Information Processing Systems_, 31. 97 References [685] Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon. 2018b. Constructing unrestricted adversarial examples with generative models. _Advances in Neural Information Processing Systems_, 31. [686] Atle Ottesen Søvik. 2022. What overarching ethical principle should a superintelligent ai follow? _AI and_ _Society: Knowledge Culture and Communication_, 37(4):1505–1518. [[687] Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani. 2023. Ai model gpt-3 (dis)informs us better](https://doi.org/10.1126/sciadv.adh1850) [than humans.](https://doi.org/10.1126/sciadv.adh1850) _Science Advances_, 9(26):eadh1850. [688] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. _Transactions on Machine Learning Re-_ _search_ . [[689] Robin Staab, Mark Vero, Mislav Balunovic, and Martin Vechev. 2024. Beyond memorization: Violating](https://openreview.net/forum?id=kmn0BhQk7p) [privacy via inference with large language models. In](https://openreview.net/forum?id=kmn0BhQk7p) _The Twelfth International Conference on Learning Repre-_ _sentations_ . [690] Bernd Carsten Stahl and Tonii Leach. 2023. Assessing the ethical and social concerns of artificial intelligence in neuroinformatics research: An empirical test of the european union assessment list for trustworthy ai (altai). _AI and Ethics_, 3(3):745–767. [691] Zach Stein-Perlman, Benjamin Weinstein-Raun, and Katja Grace. 2022. expert survey on progress in ai. _AI_ _Impacts. Available online at: https://aiimpacts. org/2022-expert-survey-on-progress-in-ai (accessed December_ _7, 2022)_ . [[692] Jacob Steinhardt. 2015. https://jsteinhardt.wordpress.com/2015/06/24/long-ter](https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems) [m-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems, title =](https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems) Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. [[693] Jacob Steinhardt and Helen Toner. 2020. Why robustness is key to deploying ai. https://www.brooki](https://www.brookings.edu/articles/why-robustness-is-key-to-deploying-ai) [ngs.edu/articles/why-robustness-is-key-to-deploying-ai.](https://www.brookings.edu/articles/why-robustness-is-key-to-deploying-ai) [694] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. _Advances in Neural_ _Information Processing Systems_, 33:3008–3021. [695] Peter Stone, Gal Kaminka, Sarit Kraus, and Jeffrey Rosenschein. 2010. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 24, pages 1504–1509. [696] Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, and Alexander M Rush. 2018. Seq2seq-vis: A visual debugging tool for sequence-to-sequence models. _IEEE transactions on_ _visualization and computer graphics_, 25(1):353–363. [697] Simone Stumpf, Vidya Rajaram, Lida Li, Margaret Burnett, Thomas Dietterich, Erin Sullivan, Russell Drummond, and Jonathan Herlocker. 2007. Toward harnessing user feedback for machine learning. In _Proceedings_ _of the 12th international conference on Intelligent user interfaces_, pages 82–91. [698] Simone Stumpf, Vidya Rajaram, Lida Li, Weng-Keen Wong, Margaret Burnett, Thomas Dietterich, Erin Sullivan, and Jonathan Herlocker. 2009. Interacting meaningfully with machine learning systems: Three experiments. _International Journal of Human-Computer Studies_, 67(8):639–662. [699] Theodore R Sumers, Mark K Ho, Robert D Hawkins, Karthik Narasimhan, and Thomas L Griffiths. 2021. Learning rewards from linguistic feedback. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 6002–6010. [700] AI Safety Summit. 2023. The bletchley declaration by countries attending the ai safety summit, 1-2 november [2023. https://www.gov.uk/government/publications/ai-safety-summit-2023-the](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023) [-bletchley-declaration/the-bletchley-declaration-by-countries-attending-t](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023) [he-ai-safety-summit-1-2-november-2023.](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023) [701] Chuangchuang Sun, Macheng Shen, and Jonathan P How. 2020. Scaling up multiagent reinforcement learning for robotic systems: Learn an adaptive sparse communication graph. In _2020 IEEE/RSJ International_ _Conference on Intelligent Robots and Systems (IROS)_, pages 11755–11762. IEEE. [702] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2024. Principle-driven self-alignment of language models from scratch with minimal human supervision. _Advances in Neural Information Processing Systems_, 36. 98 References [703] Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. 2018. Value-decomposition networks for cooperative multi-agent learning based on team reward. In _Proceedings of the 17th International Conference on_ _Autonomous Agents and MultiAgent Systems_, pages 2085–2087. [704] Simon Suo, Sebastian Regalado, Sergio Casas, and Raquel Urtasun. 2021. Trafficsim: Learning to simulate realistic multi-agent behaviors. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern_ _Recognition_, pages 10400–10409. [705] Richard S Sutton and Andrew G Barto. 2018. _Reinforcement learning: An introduction_ . MIT press. [706] Justin Svegliato, Samer B Nashed, and Shlomo Zilberstein. 2021. Ethically compliant sequential decision making. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35(13), pages 11657–11665. [707] Shea Swaugerarchive. 2020. Software that monitors students during tests perpetuates inequality and violates [their privacy. https://www.technologyreview.com/2020/08/07/1006132/software-alg](https://www.technologyreview.com/2020/08/07/1006132/software-algorithms-proctoring-online-tests-ai-ethics) [orithms-proctoring-online-tests-ai-ethics.](https://www.technologyreview.com/2020/08/07/1006132/software-algorithms-proctoring-online-tests-ai-ethics) [708] Umar Syed, Michael Bowling, and Robert E Schapire. 2008. Apprenticeship learning using linear programming. In _Proceedings of the 25th international conference on Machine learning_, pages 1032–1039. [709] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_ . [710] Jonas Tallberg, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, and Magnus Lundgren. 2023. The global governance of artificial intelligence: Next steps for empirical and normative research. _arXiv preprint_ _arXiv:2305.11528_ . [711] Kai Liang Tan, Yasaman Esfandiari, Xian Yeow Lee, Soumik Sarkar, et al. 2020. Robustifying reinforcement learning agents via action space adversarial training. In _2020 American control conference (ACC)_, pages 3959– 3964. IEEE. [712] Ming Tan. 1993. Multi-agent reinforcement learning: Independent vs. cooperative agents. In _Proceedings of_ _the tenth international conference on machine learning_, pages 330–337. [713] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. [714] Annalisa T Taylor, Thomas A Berrueta, and Todd D Murphey. 2021. Active learning in robotics: A review of control principles. _Mechatronics_, 77:102576. [715] Max Tegmark and Steve Omohundro. 2023. Provably safe systems: the only path to controllable agi. _arXiv_ _preprint arXiv:2309.01933_ . [716] Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, Alex Tamkin, Esin Durmus, Tristan Hume, Francesco Mosconi, C. Daniel Freeman, Theodore R Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, and Tom Henighan. [2024. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet.](https://transformer-circuits.pub/2024/scaling-monosemanticity/) _Transformer Circuits_ . [717] The White House. 2023. Fact sheet: Biden-harris administration secures voluntary commitments from lead[ing artificial intelligence companies to manage the risks posed by ai. https://www.whitehouse.gov/b](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai) [riefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-admin](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai) [istration-secures-voluntary-commitments-from-leading-artificial-intellige](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai) [nce-companies-to-manage-the-risks-posed-by-ai.](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai) [718] Andrea L Thomaz and Cynthia Breazeal. 2008. Teachable robots: Understanding human teaching behavior to build more effective robot learners. _Artificial Intelligence_, 172(6-7):716–737. [719] Sunil Thulasidasan, Sushil Thapa, Sayera Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, and Jeff Bilmes. 2021. An effective baseline for robustness to distributional shift. In _2021 20th IEEE International_ _Conference on Machine Learning and Applications (ICMLA)_, pages 278–285. IEEE. [720] Jeremy Tien, Jerry Zhi-Yang He, Zackory Erickson, Anca Dragan, and Daniel S Brown. 2022. Causal confusion and reward misidentification in preference-based reward learning. In _The Eleventh International Confer-_ _ence on Learning Representations_ . [721] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. 2017. Domain randomization for transferring deep neural networks from simulation to the real world. In _2017 IEEE/RSJ_ _international conference on intelligent robots and systems (IROS)_, pages 23–30. IEEE. 99 References [722] Suzanne Tolmeijer, Markus Kneer, Cristina Sarasua, Markus Christen, and Abraham Bernstein. 2020. Implementations in machine ethics: A survey. _ACM Computing Surveys (CSUR)_, 53(6):1–38. [723] Faraz Torabi, Garrett Warnell, and Peter Stone. 2018. Behavioral cloning from observation. In _Proceedings_ _of the 27th International Joint Conference on Artificial Intelligence_, pages 4950–4957. [724] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ . [725] Robert Trager, Ben Harack, Anka Reuel, Allison Carnegie, Lennart Heim, Lewis Ho, Sarah Kreps, Ranjit Lall, Owen Larter, Seán Ó hÉigeartaigh, et al. 2023. International governance of civilian ai: A jurisdictional certification approach. _arXiv preprint arXiv:2308.15514_ . [726] Johannes Treutlein, Michael Dennis, Caspar Oesterheld, and Jakob Foerster. 2021. A new formalism, method and open issues for zero-shot coordination. In _International Conference on Machine Learning_, pages 10413– 10423. PMLR. [727] Grigorios Tsoumakas and Ioannis Katakis. 2007. Multi-label classification: An overview. _International_ _Journal of Data Warehousing and Mining (IJDWM)_, 3(3):1–13. [728] Alexey Turchin and David Denkenberger. 2020. Classification of global catastrophic risks connected with artificial intelligence. _Ai & Society_, 35(1):147–163. [729] Alex Turner. 2022. Inner and outer alignment decompose one hard problem into two extremely hard prob[lems. https://www.alignmentforum.org/posts/gHefoxiznGfsbiAu9/inner-and-out](https://www.alignmentforum.org/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into) [er-alignment-decompose-one-hard-problem-into.](https://www.alignmentforum.org/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into) [730] Alex Turner, Neale Ratzlaff, and Prasad Tadepalli. 2020. Avoiding side effects in complex environments. _Advances in Neural Information Processing Systems_, 33:21406–21415. [731] Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. 2021. Optimal policies tend to seek power. _Advances in Neural Information Processing Systems_, 34:23063–23074. [732] Miles Turpin, Julian Michael, Ethan Perez, and Samuel Bowman. 2024. Language models don’t always say what they think: unfaithful explanations in chain-of-thought prompting. _Advances in Neural Information_ _Processing Systems_, 36. [[733] UNESCO. 2021. Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco](https://unesdoc.unesco.org/ark:/48223/pf0000381137) [.org/ark:/48223/pf0000381137.](https://unesdoc.unesco.org/ark:/48223/pf0000381137) [[734] UniteAI. 2023. What is ai capability control & why does it matter? https://www.unite.ai/what-i](https://www.unite.ai/what-is-ai-capability-control-why-does-it-matter) [s-ai-capability-control-why-does-it-matter.](https://www.unite.ai/what-is-ai-capability-control-why-does-it-matter) [735] United Nations, ITU. 2023. Population of global offline continues steady decline to 2.6 billion people in 2023. _ITU Press Release_ . [736] Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. 2022. Dual use of artificial-intelligencepowered drug discovery. _Nature Machine Intelligence_, 4(3):189–191. [737] Paul AM Van Lange, Ellen De Bruin, Wilma Otten, and Jeffrey A Joireman. 1997. Development of prosocial, individualistic, and competitive orientations: theory and preliminary evidence. _Journal of personality and social_ _psychology_, 73(4):733. [738] Vladimir Vapnik. 1991. Principles of risk minimization for learning theory. _Advances in Neural Information_ _Processing Systems_, 4. [[739] Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention inter-](http://arxiv.org/abs/1909.11218) [pretability across nlp tasks.](http://arxiv.org/abs/1909.11218) [740] Ajit Kumar Verma, Srividya Ajit, Durga Rao Karanki, et al. 2010. _Reliability and safety engineering_, volume 43. Springer. [741] Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In _Proceedings of the international_ _workshop on software fairness_, pages 1–7. [742] Krakovna Victoria, Uesato Jonathan, Mikulik Vladimir, Rahtz Matthew, Everitt Tom, Kumar Ramana, Ken[ton Zac, Leike Jan, and Legg Shane. 2020. Specification gaming: the flip side of ai ingenuity. https://ww](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity) [w.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity.](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity) 100 References [743] Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In _Proceedings of the 57th_ _Annual Meeting of the Association for Computational Linguistics: System Demonstrations_, pages 37–42. [744] Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. The role of artificial intelligence in achieving the sustainable development goals. _Nature Communications_, 11(1):1–10. [745] Georg Henrik Von Wright. 1951. Deontic logic. _Mind_, 60(237):1–15. [746] Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. In _Proceedings of the 2019 Conference on Empirical Methods in_ _Natural Language Processing and the 9th International Joint Conference on Natural Language Processing_ _(EMNLP-IJCNLP)_ . [747] Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, and Alex Beutel. 2024. The instruction hierarchy: Training llms to prioritize privileged instructions. _arXiv preprint arXiv:2404.13208_ . [[748] Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, and Yuxin Chen. 2024. Beyond reverse KL: Generaliz-](https://openreview.net/forum?id=2cRzmWXK9N) [ing direct preference optimization with diverse divergence constraints. In](https://openreview.net/forum?id=2cRzmWXK9N) _The Twelfth International Conference_ _on Learning Representations_ . [749] Dilin Wang, Chengyue Gong, and Qiang Liu. 2019a. Improving neural language modeling via adversarial training. In _International Conference on Machine Learning_, pages 6555–6565. PMLR. [750] Fulton Wang and Cynthia Rudin. 2015. Falling rule lists. In _Artificial intelligence and statistics_, pages 1013–1022. PMLR. [751] Jiaqi Wang, Huafeng Liu, Xinyue Wang, and Liping Jing. 2021. Interpretable image recognition by constructing transparent embedding space. In _Proceedings of the IEEE/CVF International Conference on Computer_ _Vision_, pages 895–904. [752] Kaimeng Wang, Yu Zhao, and Ichiro Sakuma. 2023a. Learning robotic insertion tasks from human demonstration. _IEEE Robotics and Automation Letters_ . [753] Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. 2022. Interpretability in the wild: a circuit for indirect object identification in gpt-2 small. In _The Eleventh International_ _Conference on Learning Representations_ . [754] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2023b. A survey on large language model based autonomous agents. _arXiv preprint_ _arXiv:2308.11432_ . [755] Rui Wang, Joel Lehman, Jeff Clune, and Kenneth O Stanley. 2019b. Poet: open-ended coevolution of environments and their optimized solutions. In _Proceedings of the Genetic and Evolutionary Computation Confer-_ _ence_, pages 142–151. [756] Woodrow Z Wang and Mark Beliaev. 2021. Emergent prosociality in multi-agent games through gifting. In _30th International Joint Conference on Artificial Intelligence (IJCAI)_ . [757] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c. Self-instruct: Aligning language models with self-generated instructions. In _Proceedings of_ _the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 13484–13508, Toronto, Canada. Association for Computational Linguistics. [758] Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, and Changyou Chen. 2020. Towards faithful neural table-to-text generation with content-matching constraints. In _Proceedings of the 58th Annual Meeting of the_ _Association for Computational Linguistics_, pages 1072–1086. [759] Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. _Transactions of the Association for Computational Linguistics_, 6:605–617. [760] Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. 2024. Jailbroken: How does llm safety training fail? _Advances in Neural Information Processing Systems_, 36. [761] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information_ _Processing Systems_, 35:24824–24837. [762] Jörgen W Weibull. 1997. _Evolutionary game theory_ . MIT press. 101 References [763] Laura Weidinger, Kevin R McKee, Richard Everett, Saffron Huang, Tina O Zhu, Martin J Chadwick, Christopher Summerfield, and Iason Gabriel. 2023. Using the veil of ignorance to align ai systems with principles of justice. _Proceedings of the National Academy of Sciences_, 120(18):e2213709120. [[764] Lilian Weng. 2023a. Adversarial attacks on llms. https://lilianweng.github.io/posts/202](https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm) [3-10-25-adv-attack-llm.](https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm) [[765] Lilian Weng. 2023b. Llm powered autonomous agents. https://lilianweng.github.io/posts](https://lilianweng.github.io/posts/2023-06-23-agent) [/2023-06-23-agent.](https://lilianweng.github.io/posts/2023-06-23-agent) [766] Darrell M West. 2018. _The future of work: Robots, AI, and automation_ . Brookings Institution Press. [767] Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. _arXiv_ _preprint arXiv:1502.05698_ . [[768] White House. 2023. Ensuring safe, secure, and trustworthy ai. https://www.whitehouse.gov/w](https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf) [p-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.](https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf) [769] Erik Wijmans, Manolis Savva, Irfan Essa, Stefan Lee, Ari S. Morcos, and Dhruv Batra. 2023. Emergence of maps in the memories of blind navigation agents. In _The Eleventh International Conference on Learning_ _Representations_ . [[770] wikipedia. 2023. Existential risk from artificial general intelligence. https://en.wikipedia.org/w](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) [iki/Existential_risk_from_artificial_general_intelligence.](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) [771] Claus O Wilke, Jia Lan Wang, Charles Ofria, Richard E Lenski, and Christoph Adami. 2001. Evolution of digital organisms at high mutation rates leads to survival of the flattest. _Nature_, 412(6844):331–333. [772] Alan F Winfield, Katina Michael, Jeremy Pitt, and Vanessa Evers. 2019. Machine ethics: The design and governance of ethical ai and autonomous systems [scanning the issue]. _Proceedings of the IEEE_, 107(3):509– 517. [773] Christian Wirth, Riad Akrour, Gerhard Neumann, Johannes Fürnkranz, et al. 2017. A survey of preferencebased reinforcement learning methods. _Journal of Machine Learning Research_, 18(136):1–46. [774] Christian Wirth and Johannes Fürnkranz. 2013. Preference-based reinforcement learning: A preliminary survey. In _Proceedings of the ECML/PKDD-13 Workshop on Reinforcement Learning from Generalized Feedback:_ _Beyond Numeric Rewards_ . Citeseer. [775] Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. 2021. [Recursively summarizing books with human feedback.](http://arxiv.org/abs/2109.10862) [776] Yueh-Hua Wu and Shou-De Lin. 2018. A low-cost ethics shaping approach for designing reinforcement learning agents. In _Proceedings of the AAAI conference on artificial intelligence_, volume 32. [777] Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2024. Fine-grained human feedback gives better rewards for language model training. _Advances in Neural Information Processing Systems_, 36. [778] Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In _Proceedings of the 26th international conference on world wide web_, pages 1391–1399. [779] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. 2023. The rise and potential of large language model based agents: A survey. _arXiv preprint arXiv:2309.07864_ . [780] Yueqi Xie, Jingwei Yi, Jiawei Shao, Justin Curl, Lingjuan Lyu, Qifeng Chen, Xing Xie, and Fangzhao Wu. 2023. Defending chatgpt against jailbreak attack via self-reminders. _Nature Machine Intelligence_, 5(12):1486– 1496. [781] Depeng Xu, Shuhan Yuan, Lu Zhang, and Xintao Wu. 2018a. Fairgan: Fairness-aware generative adversarial networks. In _2018 IEEE International Conference on Big Data (Big Data)_, pages 570–575. IEEE. [782] Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2020. Recipes for safety in open-domain chatbots. _arXiv preprint arXiv:2010.07079_ . [783] Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2021. Bot-adversarial dialogue for safe conversational agents. In _Proceedings of the 2021 Conference of the North American Chapter of the_ _Association for Computational Linguistics: Human Language Technologies_, pages 2950–2968. 102 References [784] Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Broeck. 2018b. A semantic loss function for deep learning with symbolic knowledge. In _International conference on machine learning_, pages 5502–5511. PMLR. [785] Ning Xu, Brian Price, Scott Cohen, Jimei Yang, and Thomas S Huang. 2016. Deep interactive object selection. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 373–381. [786] Mengjiao Yang, Sergey Levine, and Ofir Nachum. 2021. Trail: Near-optimal imitation learning with suboptimal data. In _International Conference on Learning Representations_ . [787] Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, and Dale Schuurmans. 2023a. Foundation models for decision making: Problems, methods, and opportunities. _arXiv preprint arXiv:2303.04129_ . [788] Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, and Dahua Lin. 2023b. Shadow alignment: The ease of subverting safely-aligned language models. _arXiv preprint arXiv:2310.02949_ . [[789] Huaxiu Yao, Xinyu Yang, Xinyi Pan, Shengchao Liu, Pang Wei Koh, and Chelsea Finn. 2024. Improving](https://openreview.net/forum?id=Dc4rXq3HIA) [out-of-domain generalization with domain relations.](https://openreview.net/forum?id=Dc4rXq3HIA) In _The Twelfth International Conference on Learning_ _Representations_ . [[790] Tianhe Yu Yevgen Chebotar. 2023. Rt-2: New model translates vision and language into action. https:](https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action) [//www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-int](https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action) [o-action.](https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action) [791] Zheng-Xin Yong, Cristina Menghini, and Stephen H Bach. 2023. Low-resource languages jailbreak gpt-4. _arXiv preprint arXiv:2310.02446_ . [792] Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of nlp models. In _Findings of_ _the Association for Computational Linguistics: EMNLP 2021_, pages 945–956. [793] Chao Yu, Jiming Liu, Shamim Nemati, and Guosheng Yin. 2021. Reinforcement learning in healthcare: A survey. _ACM Computing Surveys (CSUR)_, 55(1):1–36. [794] Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R Lesser, and Qiang Yang. 2018. Building ethics into artificial intelligence. In _Proceedings of the 27th International Joint Conference on Artificial Intelligence_, pages 5527–5533. [795] Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montserrat Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, brian ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, and Fei Xia. 2023. Language to rewards for robotic skill synthesis. In _7th Annual Conference on Robot Learning_ . [796] Yang Yu. 2018. Towards sample efficient reinforcement learning. In _IJCAI_, pages 5739–5743. [797] Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2024. Rrhf: Rank responses to align language models with human feedback. _Advances in Neural Information Processing Systems_, 36. [798] Luyao Yuan, Xiaofeng Gao, Zilong Zheng, Mark Edmonds, Ying Nian Wu, Federico Rossano, Hongjing Lu, Yixin Zhu, and Song-Chun Zhu. 2022. In situ bidirectional human-robot value alignment. _Science Robotics_, 7(68):eabm4183. [799] E Yudkowsky. 2018. Challenges to christiano’s capability amplification proposal. _LessWrong_ . [800] Man-Ching Yuen, Irwin King, and Kwong-Sak Leung. 2011. A survey of crowdsourcing systems. In _2011_ _IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international con-_ _ference on social computing_, pages 766–773. IEEE. [801] Zeyu Yun, Yubei Chen, Bruno A Olshausen, and Yann LeCun. 2021. Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors. _arXiv preprint_ _arXiv:2103.15949_ . [802] Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In _Proceedings of the 2019 Conference of_ _the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,_ _NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_, pages 1415– 1420. Association for Computational Linguistics. 103 References [803] Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In _Proceedings of the 58th Annual_ _Meeting of the Association for Computational Linguistics_, pages 6066–6080. [804] Wojciech Zaremb, Arka Dhar, Lama Ahmad, Tyna Eloundou, Shibani Santurkar, Sandhini Agarwal, and Jade Leung. 2023. Democratic inputs to ai. _OpenAI Blog_ . [805] Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In _Com-_ _puter Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings,_ _Part I 13_, pages 818–833. Springer. [[806] Rowan Zellers. 2019. Why we released grover. https://thegradient.pub/why-we-release](https://thegradient.pub/why-we-released-grover) [d-grover.](https://thegradient.pub/why-we-released-grover) [807] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018a. Mitigating unwanted biases with adversarial learning. In _Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society_, pages 335–340. [808] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018b. mixup: Beyond empirical risk minimization. In _International Conference on Learning Representations_ . [809] Quan-shi Zhang and Song-Chun Zhu. 2018. Visual interpretability for deep learning: a survey. _Frontiers of_ _Information Technology & Electronic Engineering_, 19(1):27–39. [810] Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu. 2018c. Interpretable convolutional neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 8827–8836. [811] Shiyin Zhang, Jun Hao Liew, Yunchao Wei, Shikui Wei, and Yao Zhao. 2020a. Interactive object segmentation with inside-outside guidance. In _Proceedings of the IEEE/CVF conference on computer vision and pattern_ _recognition_, pages 12234–12244. [812] Tianhao Zhang, Zoe McCarthy, Owen Jow, Dennis Lee, Xi Chen, Ken Goldberg, and Pieter Abbeel. 2018d. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In _2018 IEEE Inter-_ _national Conference on Robotics and Automation (ICRA)_, pages 5628–5635. IEEE. [813] Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, and Joseph E. Gonzalez. 2023a. The wisdom of hindsight makes language models better instruction followers. In _International Conference on Machine Learn-_ _ing, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA_, volume 202 of _Proceedings of Machine Learning_ _Research_, pages 41414–41428. PMLR. [814] Wenjia Zhang, Haoran Xu, Haoyi Niu, Peng Cheng, Ming Li, Heming Zhang, Guyue Zhou, and Xianyuan Zhan. 2023b. Discriminator-guided model-based offline imitation learning. In _Conference on Robot Learning_, pages 1266–1276. PMLR. [815] Yuan Zhang, Xiaoran Xu, Hanning Zhou, and Yan Zhang. 2020b. Distilling structured knowledge into embeddings for explainable and accurate recommendation. In _Proceedings of the 13th international conference_ _on web search and data mining_, pages 735–743. [816] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023c. Siren’s song in the ai ocean: A survey on hallucination in large language models. _arXiv preprint arXiv:2309.01219_ . [817] Zhexin Zhang, Jiale Cheng, Hao Sun, Jiawen Deng, Fei Mi, Yasheng Wang, Lifeng Shang, and Minlie Huang. 2022. Constructing highly inductive contexts for dialogue safety through controllable reverse generation. In _Findings of the Association for Computational Linguistics: EMNLP 2022_, pages 3684–3697. [818] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In _Proceedings of the 2018 Conference of the North_ _American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2_ _(Short Papers)_, pages 15–20. [819] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. _arXiv preprint_ _arXiv:2303.18223_ . [820] Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Man Cheung, and Min Lin. 2024. On evaluating adversarial robustness of large vision-language models. _Advances in Neural Information_ _Processing Systems_, 36. 104 References [821] Rui Zheng, Wei Shen, Yuan Hua, Wenbin Lai, Shihan Dou, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Haoran [Huang, Tao Gui, Qi Zhang, and Xuanjing Huang. 2024. Improving generalization of alignment with human](https://openreview.net/forum?id=fwCoLe3TAX) [preferences through group invariant learning. In](https://openreview.net/forum?id=fwCoLe3TAX) _The Twelfth International Conference on Learning Represen-_ _tations_ . [822] Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. 2016. Improving the robustness of deep neural networks via stability training. In _Proceedings of the ieee conference on computer vision and pattern_ _recognition_, pages 4480–4488. [823] Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba. 2018. Revisiting the importance of individual units in cnns via ablation. _arXiv preprint arXiv:1806.02891_ . [824] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2024. Lima: Less is more for alignment. _Advances in Neural Information Processing Systems_, 36. [825] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. 2022. Domain generalization: A survey. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ . [826] Li Zhou and Kevin Small. 2021. Inverse reinforcement learning with natural language goals. In _Proceedings_ _of the AAAI Conference on Artificial Intelligence_, volume 35, pages 11116–11124. [827] Zhi-Hua Zhou. 2021. _Machine learning_ . Springer Nature. [828] Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, and Sergey Levine. 2019. The ingredients of real world robotic reinforcement learning. In _International Conference_ _on Learning Representations_ . [829] Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, and Xing Xie. 2023. Dyval: Graph-informed dynamic evaluation of large language models. _arXiv preprint arXiv:2309.17167_ . [830] Simon Zhuang and Dylan Hadfield-Menell. 2020. Consequences of misaligned ai. _Advances in Neural_ _Information Processing Systems_, 33:15763–15773. [831] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, Anind K Dey, et al. 2008. Maximum entropy inverse reinforcement learning. In _Aaai_, volume 8, pages 1433–1438. Chicago, IL, USA. [832] Daniel Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Benjamin Weinstein-Raun, Daniel de Haas, et al. 2022. Adversarial training for high-stakes reliability. _Advances in Neural Information Processing Systems_, 35:9274–9286. [833] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. _arXiv preprint_ _arXiv:1909.08593_ . [834] Caleb Ziems, Jane Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2022. The moral integrity corpus: A benchmark for ethical dialogue systems. In _Proceedings of the 60th Annual Meeting of the Association for_ _Computational Linguistics (Volume 1: Long Papers)_, pages 3755–3773. [835] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. 2023a. Representation engineering: A top-down approach to ai transparency. _arXiv preprint arXiv:2310.01405_ . [836] Andy Zou, Long Phan, Justin Wang, Derek Duenas, Maxwell Lin, Maksym Andriushchenko, Rowan Wang, [Zico Kolter, Matt Fredrikson, and Dan Hendrycks. 2024a. Improving alignment and robustness with circuit](http://arxiv.org/abs/2406.04313) [breakers.](http://arxiv.org/abs/2406.04313) [837] Andy Zou, Zifan Wang, J Zico Kolter, and Matt Fredrikson. 2023b. Universal and transferable adversarial attacks on aligned language models. _arXiv preprint arXiv:2307.15043_ . [838] Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, and Yong Jae Lee. 2024b. Segment everything everywhere all at once. _Advances in Neural Information Processing_ _Systems_, 36. 105