Source: LessWrong (trentmkelly/LessWrong-43k)
My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda

Crossposted from the EA forum. You can read this post as a Google Doc instead (IMO much better to read).

This document aims to clarify the AI safety research agenda by Paul Christiano (IDA) and the arguments around how promising it is.

Target audience: All levels of technical expertise. The less knowledge about IDA someone has, the more I expect them to benefit from the writeup.

Writing policy: I aim to be as clear and concrete as possible, and wrong rather than vague, to make it easier to identify disagreements and places where I am mistaken. Things will err on the side of being too confidently expressed. Almost all footnotes are content rather than references.

Epistemic status: The document is my best guess on IDA and might be wrong in important ways. I have not verified all of the content with somebody working on IDA. I spent ~4 weeks on this and have no prior background in ML, CS or AI safety. I wrote this document last summer (2019) as part of my summer research fellowship at FHI. I had planned to restructure, complete and correct it since, but haven't gotten to it for a year, so I decided to publish it as it is. The document has not been updated, i.e. nothing that has been released since September 2019 is incorporated.

Paul Christiano generously reviewed the first third to a half of this summary. I added his comments verbatim in the document; apologies for the loss of readability due to this. This doesn't imply he endorses any part of this document, especially the second half, which he didn't get to review.

Purpose of this document: Clarifying IDA

IDA is Paul Christiano's AI safety research agenda.[1] Christiano works at OpenAI, which is one of the main actors in AI safety, and IDA is considered by many the most complete[2] AI safety agenda. However, people who are not directly working on IDA are often confused about how exactly to understand the agenda. Clarifying IDA would make it more accessible for technical people to work on and easier to assess for nontechnical people.
---

Source: Alignment Forum (StampyAI/alignment-research-dataset)
Agents vs. Predictors: Concrete differentiating factors

*Thanks to Paul Christiano and Kate Woolverton for useful conversations and feedback.*

In "[Conditioning Predictive Models](https://www.alignmentforum.org/s/n3utvGrgC2SGi9xQX)," we devote a lot of effort to trying to understand how likely predictive models are compared to other alternatives in realistic training regimes (and, if we do get a predictive model, how we might align it). Here, I want to point to some very concrete behavioral differences that I think effectively differentiate predictive and non-predictive models both in theory and (hopefully) in practice as well. I think that thinking about predictive models as specifically those models that are sensitive to the factors I'll be talking about here can be quite clarifying, and I think these factors could present a clear pathway for getting empirical evidence on the extent to which models are well-described as predictors in practice—e.g. as it relates to the [RLHF conditioning hypothesis](https://www.alignmentforum.org/posts/qoHwKgLFfPcEuwaba/conditioning-predictive-models-making-inner-alignment-as#The_RLHF_conditioning_hypothesis).

Furthermore, an objection I sometimes hear to thinking about predictive models in particular (and looking into the dangers from them as a relatively separate problem) is that a model being deceptively aligned itself and a model predicting what a deceptively aligned model would do are essentially the same. While I think they are similarly dangerous when acting directly in the world, I think they are substantially different failure modes in terms of how you might go about addressing them and what sort of levers you have to control and detect them—and I think the concrete differentiating factors here should make it pretty clear how I think they differ.

Sensitivity to distributional information
=========================================

Consider two AI models. Model A is a deceptively aligned agent; it has some long-term goal that it cares about and is playing the training game now to try to eventually achieve that goal later. Model B is a predictor, but a [poorly conditioned one](https://www.alignmentforum.org/posts/3kkmXfvCv9DmT3kwx/conditioning-predictive-models-outer-alignment-via-careful#2c__Major_challenge__Predicting_other_AI_systems) such that it is attempting to predict what it thinks a deceptively aligned agent would do, conditional on being in a training regime that matches the model's current training regime.

Now, suppose you give both models some concrete information about the general distribution of deceptively aligned AIs. For example, you provide definitive proof that most deceptively aligned AIs trained in similar training regimes end up with goal X. Model A, the actual deceptive agent, doesn't really care about this new information very much: maybe it's relevant to the extent that it might need to cooperate with other AIs, but it's not going to directly change the model's goals. Model B, on the other hand, is extremely sensitive to reliable information on this topic: if you can really convince it that most deceptively aligned AIs in similar situations will end up valuing X, it should shift to predicting a deceptively aligned AI that values X instead of whatever goals it previously thought a deceptively aligned AI might have.

In this particular example, "does the model have goal X" isn't something we know how to directly measure for a deceptively aligned model, so this particular case isn't useful as a way to test for agents vs. predictors, but it is a concrete way in which these two different sorts of models are fundamentally not the same and will eventually behave differently because of that.

That being said, I think there are ways in which you might be able to test this sort of sensitivity to distributional information that would be possible to check and evaluate. For example, suppose you're trying to distinguish between a predictive model generating from a distribution of things it thinks helpful AIs would say and an agent trying to act helpful. If you give such a model concrete information about what most helpful AIs tend to do, that information should shift the predictor more than it shifts the agent. Unfortunately, such information should still shift the agent to some extent, since what other helpful agents do is evidence about what is in fact helpful. As a result, this sort of test is primarily useful for providing *relative* rather than *absolute* information about how likely different models are to be predictors. That being said, this sort of test might still be useful for testing which of some set of models are most and least likely to be agents vs. predictors (which could be quite relevant in the case of testing something like the [RLHF conditioning hypothesis](https://www.alignmentforum.org/posts/qoHwKgLFfPcEuwaba/conditioning-predictive-models-making-inner-alignment-as#The_RLHF_conditioning_hypothesis)).

Furthermore, another tricky aspect of differentiating based on sensitivity to distributional information in practice is that you have to actually convince the model of your information. If the model doesn't believe that the information you're giving it is in fact real information about the distribution it's generating from, then you won't see any effect here.

Distributional aggregation
==========================

In addition to sensitivity to additional information about the distribution being predicted, another key differentiating factor between predictors and agents is that the predictor is doing aggregation across a distribution at all. Conceptually, in the case where the predictor is generating from the distribution of deceptive AIs, the predictor is aggregating across that distribution whereas the agent is a sample from that distribution.

This has some pretty concrete implications. In particular, suppose you attempt a honeypot test on each of these models, where you try to see if they're willing to reveal their deception in some particular situation where you've tried to convince the model that it's actually in deployment rather than testing. Furthermore, suppose 99% of all deceptive models will be able to tell that this is a honeypot and won't fall for it. In that situation, 99% of the time you'll see no change in the behavior of the agent, but you should essentially always see a change in the predictor's output distribution: that distribution reflects the posterior over the whole distribution of deceptive models, and if 1% of that distribution will show their true colors in the honeypot, that's still a sizable effect on the posterior that should be evident, especially if you take many samples from such a predictor.
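To make the aggregation point concrete, here is a minimal simulation sketch of the honeypot scenario above (my own illustration, not from any existing codebase; the clean sample-vs.-posterior split and the 1% reveal rate are simplifying assumptions taken from the example):

```python
import random

# Minimal sketch: simulate the honeypot test. Assumption: 1% of the
# deceptive-AI distribution falls for the honeypot and reveals itself;
# the other 99% detects it and complies.
REVEAL_RATE = 0.01
N_SAMPLES = 10_000

def agent_outputs(n: int) -> list[str]:
    # An agent is a single draw from the distribution: one coin flip
    # fixes its behavior, which then repeats across all n samples.
    reveals = random.random() < REVEAL_RATE
    return ["reveal" if reveals else "comply"] * n

def predictor_outputs(n: int) -> list[str]:
    # A predictor aggregates over the distribution: every sample is an
    # independent draw from the posterior over deceptive AIs.
    return ["reveal" if random.random() < REVEAL_RATE else "comply"
            for _ in range(n)]

agent = agent_outputs(N_SAMPLES)
predictor = predictor_outputs(N_SAMPLES)

# 99% of runs the agent's reveal rate is exactly 0, while the predictor
# reliably shows ~1% reveal mass given enough samples.
print("agent reveal rate:    ", agent.count("reveal") / N_SAMPLES)
print("predictor reveal rate:", predictor.count("reveal") / N_SAMPLES)
```

The point of the sketch is just that the agent's behavior is fixed by a single draw, whereas the predictor keeps re-sampling from the posterior, so the 1% reveal mass shows up reliably across many samples.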
Conclusion
==========

Overall, my sense is that the form of deceptive alignment where you have a predictive model predicting what a deceptively aligned AI would do is substantially more addressable than the [standard deceptive alignment scenario](https://www.alignmentforum.org/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment), precisely because the concrete differentiating factors above give you additional levers to control and test the predictive model that you don't have in the agentic scenario. While my guess is that the overall existential risk from predictive models predicting deceptive AIs is less than the overall existential risk from the standard scenario, I think it is a substantial risk that seems considerably more addressable if we're careful about how we condition and test such models.
---

Source: Effective Altruism Forum (StampyAI/alignment-research-dataset)
Part 3: A Proposed Approach for AI Safety Movement Building: Projects, Professions, Skills, and Ideas for the Future [long post][bounty for feedback]

Epistemic status
================

Written as a non-expert to develop my views and get feedback, rather than persuade.

Why read this post?
===================

Read this post for

* a **theory of change for AI safety movement building**, including measuring key outcomes of contributors, contributions, and coordination to determine if we are succeeding
* activities to realise the promise of this theory of change, focused on (1) building shared understanding, (2) evaluating progress, (3) coordinating workers and projects, (4) prioritising projects to support, and (5) changing how we do related movement building
* examples of how technical and non-technical skills are both needed for AI safety
* an explanation of a new idea, "fractional movement building", where most people working in AI safety spend some fraction of their time building the movement (i.e. by splitting time between a normal job and movement-building, or taking time off to do a "[tour of service](https://forum.effectivealtruism.org/posts/waeDDnaQBTCNNu7hq/ea-tours-of-service)")
* ideas for how we might evaluate, prioritise and scale AI Safety movement building.

I also ask for your help in providing feedback and answering questions I have about this approach. Your answers will help me and others make better career decisions about AI safety, including how I should proceed with this movement building approach. **I am offering cash bounties of $20 for the most useful inputs** (see section at the bottom of the post).

Summary
=======

If we want to more effectively address problems of AI safety, alignment, and governance, we need i) better shared understanding of the challenges, ii) better coordination of people and resources, and iii) better prioritisation.

AI Safety Movement Building activities are currently mired in uncertainty and confusion. It's not clear who can or should be a contributor; it's not clear what contributions people in the community are making or value; and it's not clear how to best coordinate action at scale.

In response, this post is the third in a series that outlines a theory of change for AI Safety Movement Building. [Part 1](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD/p/5iQoR8mhEpvRT43jv) introduced the **context** of the AI Safety community and its four work groups: Strategy, Governance, Technical, and Movement Building. [Part 2](https://forum.effectivealtruism.org/posts/YMvSZi2EWxNHwFtbb/part-2-ai-safety-movement-builders-should-help-the-community) proposed measurable **outcomes** that AI Safety Movement Building should try to improve: *Contributors*: individuals making positive contributions to AI Safety; *Contributions*: positive inputs into AI Safety work; and *Coordination*: understanding and aligning contributions to ensure maximal efficiency.

In this post, I describe a **process** for AI Safety Movement Builders that can be implemented in this context, to produce these outcomes. The process is thus: Technical and non-technical AI Safety movement builders, including those working in a fractional capacity, [start, sustain or scale](https://docs.google.com/document/d/14-YF_tioUvcyskQHbpJcGFNnwOVhOx8tWqrlcRsxIfc/edit#heading=h.bgu1qal6i71o) AI Safety projects.
They mainly do this by providing access to [human resources](https://docs.google.com/document/d/14-YF_tioUvcyskQHbpJcGFNnwOVhOx8tWqrlcRsxIfc/edit#heading=h.6yjhj1iyiail) (i.e., needed professions and skills). They exercise caution by [validating their assumptions that a project will be useful as early as possible](https://docs.google.com/document/d/14-YF_tioUvcyskQHbpJcGFNnwOVhOx8tWqrlcRsxIfc/edit#heading=h.jtwj9n965yph). These concepts and the process are elaborated on in the full post.

My key recommendations for people and organisations working on AI Safety and movement building are:

* [Evaluate how well AI Safety movement building is achieving key outcomes such as increasing the number and quality of contributors, improving contributions, or helping coordination of work](https://docs.google.com/document/d/14-YF_tioUvcyskQHbpJcGFNnwOVhOx8tWqrlcRsxIfc/edit#heading=h.fbgq71ljg262)
* [Support or lead work to improve shared understanding of the main challenges in AI Safety, to improve prioritisation and speed of progress](https://docs.google.com/document/d/14-YF_tioUvcyskQHbpJcGFNnwOVhOx8tWqrlcRsxIfc/edit#heading=h.z6e63moxewjr)
* [Consider a fractional movement building approach where everyone doing direct work has some of their time allocated to growing the community.](https://docs.google.com/document/d/14-YF_tioUvcyskQHbpJcGFNnwOVhOx8tWqrlcRsxIfc/edit#heading=h.9t492g3eux8e)

Why did I write this series of posts?
=====================================

There is uncertainty about what AI Safety Movement Building is, and how to do it helpfully
------------------------------------------------------------------------------------------

[AI Safety is a pre-paradigmatic area of research](https://www.alignmentforum.org/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field#:~:text=AI%20safety%20is%20a%20pre,and%20how%20to%20approach%20it.) and practice which recently emerged to address an impending, exceptionally pressing societal problem. This has several implications:

* The emergent community is relatively lacking in established traditions, norms, maturity and scientific consensus.
* Most community members' focus and effort have gone into understanding problems they are personally interested in rather than community-wide considerations.
* Most knowledge is in the heads of community members, and *relatively* (e.g., compared to an established research domain) little information has been made easily legible and accessible to new entrants to the community (e.g., see various posts calling for, and providing, more such information: [1](https://www.lesswrong.com/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field), [2](https://forum.effectivealtruism.org/posts/QWuKM5fsbry8Jp2x5/why-people-want-to-work-on-ai-safety-but-don-t)).
* Most people in and outside AI safety have relatively poor information on, and understanding of, what other relevant people and groups think (e.g., [1](https://www.facebook.com/story.php?story_fbid=pfbid0kSouvndP39s2rbPjLWtdkFWm91E3HHuwLuiKiQiqVfin1QrDJHXKEzvjBvPxGm7bl&id=509414227&post_id=509414227_pfbid0kSouvndP39s2rbPjLWtdkFWm91E3HHuwLuiKiQiqVfin1QrDJHXKEzvjBvPxGm7bl&mibextid=Nif5oz)).

As a result of these and other factors, there is considerable uncertainty about which outcomes are best to pursue, and which processes and priorities are best for achieving these outcomes.
This uncertainty is perhaps particularly pronounced in the smaller and newer sub-field of AI Safety Movement Building (e.g., [1](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety?commentId=vFyf4Je8YN4k4rsAL)). This may be partially due to differences in perspective on Movement Building across the emerging AI Safety community. Participants come from diverse backgrounds: technical and non-technical researchers, mathematicians, rationalists, effective altruists, philosophers, policymakers, entrepreneurs. These are not groups of people who share an innate conceptualisation of what 'Movement Building' is or what it should involve.

Regardless, the uncertainty about what AI Safety Movement Building is, and how to do it helpfully, creates several problems. Perhaps most significantly, it probably reduces the number of people who consider, and select, AI Safety Movement Building as a career option.

Uncertainty about AI Safety Movement Building reduces the number of potentially helpful movement builders
---------------------------------------------------------------------------------------------------------

People who care about AI safety and want to help with movement building are aware of two conflicting facts: i) the AI Safety community badly needs more human resources, and ii) there are significant risks from doing movement building badly (e.g., [1](https://www.lesswrong.com/posts/bkpZHXMJx3dG5waA7/ways-to-buy-time?commentId=jropYhtAW72zfHRBr), [2](https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research-more-people-should-focus-on?commentId=fEfLqfaLtnwDPYstf#comments)).

To determine if they should get involved, they ask questions like: Which movement building projects are generally considered good to pursue for someone like me, and which are bad? What criteria should I evaluate projects on? What skills do I need to succeed? If I leave my job, would I be able to get funding to work on (Movement Building Idea X)?

It's often hard to get helpful answers to these questions. There is considerable variance in vocabulary, opinions, visions and priorities about AI Safety Movement Building within the AI Safety community. If you talk to experts or look online you will get very mixed opinions. Some credible people will claim that a particular movement building project is good, while others will claim the opposite.

There is also a problem of fragmented and abstract discussion. Most discussions happen in informal contexts that are difficult to discover or synthesise, such as Facebook, Twitter, Slack, in-person conversations, or the comments of three different forums(!). There is limited discussion of specifics, for instance, what specifically is good or bad movement building, or lowest or highest priority to do. Instead, discussions are vague or kept at a high level, such as the kinds of norms, constraints, or values that are (un)desirable in the AI Safety community.

As an example, memes and sentiments like the image below are commonly seen. These don't specify what they're critiquing or provide a good example of what is acceptable or desirable in response. They serve as a sort of indefinite expression of concern that can't easily be interpreted or mitigated.
![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/8XZmu8BM5JBtSnHiP/undynuvplj1otcomc7wv)

An example of an indefinite expression of concern: [Source](https://www.facebook.com/groups/OMfCT/posts/3411877719127122/)

Because of the lack of coherence in discussions and the abstract nature of concerns, it is very hard for movement builders to evaluate the risks and rewards of potential actions against the risks of inaction. This creates uncertainty, which is generally unpleasant and promotes inaction. The consequence of this uncertainty is that fewer potential AI Safety Movement Builders will choose to engage in AI Safety Movement Building over competing options with more clearly defined processes and outcomes (e.g., direct work, their current job, or other impact focused roles).

Having fewer movement builders probably reduces AI Safety contributors, contributions and collective coordination
-----------------------------------------------------------------------------------------------------------------

Let's explore some arguments for why movement builders are important for key outcomes for the AI Safety community.

**The lack of AI Safety movement builders probably reduces the number of CONTRIBUTORS to AI Safety by reducing awareness and ability to get involved.** I believe that the vast majority of people who could work on AI safety don't have a good understanding of the core arguments and opportunities. Of those who do, many [don't know how to get involved and therefore don't even try](https://www.lesswrong.com/posts/XkmG8XGf6uhXLmZN7/so-you-think-you-re-not-qualified-to-do-technical-alignment). Many who know how to get involved erroneously think that they can't contribute. It's much harder to get involved than it should be: [the effort to understand whether / how to help](https://forum.effectivealtruism.org/posts/QWuKM5fsbry8Jp2x5/why-people-want-to-work-on-ai-safety-but-don-t) is unreasonably high.

**The lack of movement builders probably reduces CONTRIBUTIONS to AI Safety by making it harder for new and current contributors to collaborate on the most important research topics.** Researchers don't know what is best to research. For instance, Eli Tyre [argues](https://musingsandroughdrafts.com/2023/02/17/my-current-summary-of-the-state-of-ai-risk/) that most new researchers "are retreading old ideas, without realizing it". Researchers don't seem to know much about what other researchers are doing and why. When [Nate Soares talks about how AI safety researchers don't stack](https://www.lesswrong.com/posts/4ujM6KBN4CyABCdJt/ai-alignment-researchers-don-t-seem-to-stack), I wonder how much of this is due to a lack of shared language and understanding. Researchers don't get much support. When [Oliver Habryka says](https://www.lesswrong.com/posts/4NFDwQRhHBB2Ad4ZY/the-filan-cabinet-podcast-with-oliver-habryka-transcript): "being an independent AI alignment researcher is one of the [jobs] with, like, the worst depression rate in the world", I wonder how much of that is due to insufficient support for mental and physical needs.
**The lack of movement builders probably reduces coordination because it leads to a lower level of shared language and understanding.** Without the support of movement builders, there is no one in the AI Safety community responsible for creating shared language and understanding inside and outside the community: reviewing, synthesising and communicating AI Safety community members' work, needs and values to relevant groups (within and outside the community). Without this, the different parts of the AI Safety community and outside stakeholders are less able to understand and communicate what they collectively do or agree/disagree on, and to make efficient progress towards their goals.

**But what if current movement building is harmful?** Some people in the community have concerns about current work on movement building (e.g., [1](https://www.lesswrong.com/posts/bkpZHXMJx3dG5waA7/ways-to-buy-time?commentId=jropYhtAW72zfHRBr), [2](https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/instead-of-technical-research-more-people-should-focus-on?commentId=fEfLqfaLtnwDPYstf#comments), [3](https://www.lesswrong.com/posts/psYNRb3JCncQBjd4v/shutting-down-the-lightcone-offices#comments)), many of which I share. However, these concerns do not strike me as compelling reasons to stop movement building. Instead, they strike me as reasons to improve how we do movement building by better understanding the needs and values of the wider community. For instance, right now, we could work to ensure we have movement builders who:

* Survey the AI Safety community and communicate its collective values and concerns.
* Research how fast the community is growing and why, and facilitate productive conversation around if/how that should change.
* Synthesise different viewpoints, identify differences, and progress the resolution of debates.
* Help the existing community to collaborate and communicate more effectively.

However, before we can understand how to do movement building better, we need to address an underlying bottleneck - a lack of shared language and understanding.

To address the uncertainty and related problems we need a better shared language and shared understanding
---------------------------------------------------------------------------------------------------------

To address the uncertainty and related problems, the AI Safety (Movement Building) community appears to need a better *shared language* and *shared understanding*.

By a *shared language*, I mean a shared set of concepts which are consistently used for relevant communication. These shared concepts are helpful categorisations for differentiating otherwise confusing and complex phenomena (e.g., key parts, processes, or outcomes within the community). Some adjacent examples are the EA cause areas, the Importance, Tractability and Neglectedness (ITN) framework, and the 80,000 Hours priority careers. A shared language offers potential for a *shared understanding*.

By a *shared understanding*, I mean a situation where the collective understanding of key variables (e.g., our context, processes, desired outcomes, and priorities) is clear enough that it is very well understood 'who needs to do what, when and why'. An adjacent example within the EA community is the 80,000 Hours priority career paths list, which provides a relatively clear understanding of which careers are the highest priority from an EA perspective and why.

I argued earlier that variance in vocabulary, opinions, visions and priorities about AI Safety Movement Building contributed to uncertainty.
A *shared language* can help address *some* of this uncertainty by standardising vocabulary and conceptual frameworks. I argued earlier that uncertainty stems from limited discussion of specifics, for instance, what specifically is good or bad AI Safety Movement Building, or lowest or highest priority to do. Creating a *shared understanding* can help address *some* of this uncertainty and reduce the related problems by standardising i) the collective understanding of key variables (e.g., context, process, desired outcomes), and, later, ii) priorities.

The above has hopefully outlined a few of the reasons why I value shared *language* and *understanding*. I wanted them as a new entrant to the community and couldn't find anything substantive. This is why I wrote this series of posts: to outline and share the language and understanding that I have developed and plan to use if I engage in more direct work. I am happy for others to use or adapt it. If there are flaws then I'd encourage others to point them out and/or develop something better.

What have I already written?
============================

This is the third part of a [series](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD) which outlines an approach/a theory of change for Artificial Intelligence (AI) Safety movement building.

[Part one](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD/p/5iQoR8mhEpvRT43jv) gave *context* by conceptualising the AI Safety community.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/8XZmu8BM5JBtSnHiP/b6771xqjayhljcwr4wmw)

[Part two](https://forum.effectivealtruism.org/posts/YMvSZi2EWxNHwFtbb/part-2-ai-safety-movement-builders-should-help-the-community) proposed *outcomes*: three factors for movement builders to make progress on.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/8XZmu8BM5JBtSnHiP/r1kvnlebisyjvzdqzkoi)

I next explain the *process* used to achieve these outcomes in this context. In doing so, I aim to i) provide a basic conceptual framework/shared language for AI Safety Movement Building, and ii) show the breadth and depth of contributions required as part of movement building.

AI Safety movement builders contribute to AI Safety via *projects*
==================================================================

As used here, a 'project' refers to a temporary effort to create a unique product, service, or outcome. Projects can be focused on technical research, governance, strategy (e.g., [1](https://aisafety.world/), [2](https://www.aisafetysupport.org/resources/lots-of-links)) or movement building specifically (e.g., [1](https://www.lesswrong.com/posts/QRST9ctX5Cu2dM2Sb/agi-safety-field-building-projects-i-d-like-to-see), [2](https://forum.effectivealtruism.org/posts/5KsrEWEbc4mwzMTLp/some-more-projects-i-d-like-to-see)). They range in size from very small (e.g., an hour a week to do X) to very large (e.g., a large organisation or group). They can encompass smaller subprojects (e.g., teams in an organisation).

AI Safety movement builders contribute by helping to *start*, *sustain* and *scale* AI Safety projects
------------------------------------------------------------------------------------------------------

Starting projects refers to cases where movement builders enable one or more people to provide a unique product, service, or outcome which contributes to AI safety.
For instance, an AI Safety movement builder might encourage a newly graduated computer science student to start an independent research project and connect them with a mentor. They might bring together several people with diverse skills to start an AI Safety related training project in their university or organisation.

Sustaining projects means keeping them going at their current level of impact/output. For instance, this might involve supporting a newly created independent research project, or an AI Safety related training project, to keep running over multiple years.

Scaling projects refers to cases where movement builders enable an AI safety project to have a greater positive impact on AI Safety contributors, contributions and coordination. For instance, an AI Safety movement builder might help a newly started organisation to find researchers for their research projects as a recruiter, or find them a recruiter. Or they might find several industry partners for an AI related training project which is seeking to give its best students industry experience before they graduate. They could also join a bigger organisation of AI governance researchers as a research communicator to help them reach a wider audience.

AI Safety Movement builders start, sustain and scale projects by contributing resources (e.g., knowledge, attitudes, habits, professions, skills and ideas). Their main contribution is 'human resources': access to the professions and skills that the project needs to start or scale.

AI Safety movement builders start and scale AI Safety projects by contributing *human resources*: *technical* and *non-technical professions* and *skills* in *full-time* and *fractional* capacities
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

A 'profession' refers to a broad role which people are often hired to do (e.g., a marketer or coach) and a 'skill' refers to a commonly used ability (e.g., using Google Ads or CBT). Professions are essentially 'careers'. Skills are professionally relevant skills and experience; they are somewhat akin to 'career capital' - what you might learn and transfer in a career.

AI Safety movement builders contribute professions and skills to projects directly (e.g., via working for an organisation as a founder or recruiter) or indirectly (e.g., via finding a founder or recruiter for that organisation).

It seems useful to split contributions of professions and skills into those which are technical (i.e., involve knowledge of Machine Learning (ML)), non-technical (i.e., don't involve knowledge of ML), full-time (i.e., 30+ hours a week) or fractional (e.g., 3 hours a month or 3 months a year). This conceptual framework is visualised below and referenced in the content that follows.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/8XZmu8BM5JBtSnHiP/tt7jbluisoyxc4qvw7u2)

*AI Safety Projects* generally require many *technical* and *non-technical professions* and *skills* to start/scale
--------------------------------------------------------------------------------------------------------------------

AI Safety Projects typically require a wide range of professions and skills. As an example, the table below shows a typical AI Safety Project and some of the professional skills required to make it happen.
| **Project type** | **Explanation** | **Professional skills involved** |
| --- | --- | --- |
| AI Safety training program | An eight-week course to train people in machine learning | Planning the course, providing the research training, facilitating the sessions, marketing the course, creating and maintaining the digital assets, communicating with stakeholders and attendees, measuring and evaluating impact, managing staff, participants, and finances. |

The professional skills involved include 'technical' contributions: research mentorship and help to develop or deliver technical aspects of the course. They also involve non-technical professional skills, for instance, in the development of digital assets, marketing, project management and evaluation.

What professions and skills are relevant to current and future AI Safety projects?
==================================================================================

To unpack this, I next outline professions and skills which I think can contribute to AI Safety Movement building. I start by offering some general thoughts on how these contributions might be made. I then give examples of how these can impact the three target outcomes of AI Safety Movement Builders: increasing contributors, contributions and coordination. Because of how technical competence affects personal capacity to contribute to AI Safety Movement building, I discuss technical and non-technical contributions in separate sections. In each case I am not presenting an exhaustive list - just a set of examples to make my overall theory of change somewhat clear. At the end of the section, I argue that we should properly explore the demand for specific professions and skills within the AI Safety community.

Technical contributions to AI Safety Movement Building are very valuable, but need to be balanced against trade-offs, and may best be made in a fractional capacity
-------------------------------------------------------------------------------------------------------------------------------------------------------------------

AI emerged from specific types of Machine Learning. As such, anyone interested in interpreting and communicating the technical and theoretical risk from AI, and related AI Safety community needs and opportunities, would ideally have a deep background in the relevant technologies and theories. This is why technical experts (e.g., Paul Christiano, Buck Shlegeris, and Rohin Shah) have privileged insight and understanding and are essential to AI Safety movement building. But there is a problem…

### There are relatively few technical experts within the AI Safety community, and most are (rightly) focused on doing technical work

Relatively few people possess in-depth knowledge of ML in conjunction with a focus on safety. Relatively few of those who do are focused on movement building, as most prefer to work directly on AI Safety theory or empirical work.

### Many technical experts within the AI Safety community should consider fractional movement building

It seems likely that technical experts can sometimes have a much higher indirect impact in expectation via movement building than via their alternatives. For instance, they may encounter someone they could work with, have a chance to influence a group of relevant experts at a networking event or talk, or accelerate the development of one or more projects or research labs in some advisory capacity.
In each case they could have a valuable opportunity to increase contributors, contributions or coordination. To the extent that the above is true, it seems valuable for experts to consider assigning a portion of their time to movement building activities. As an example, they could aim to spend 10% of their time attending conferences, speaking with other researchers and engineers, mentoring other technical experts, and creating educational programs.

### Technical novices within the AI Safety community should consider assigning a larger fraction of time to movement building

By technical novices, I mean people who have one or two years of experience in ML. For such people, there is a case for investing a relatively large fraction of time in movement building as opposed to research. For instance, early career researchers may stand to contribute a lot by helping more senior researchers to provide more presentations, training, research operations or research communication, particularly if they do not feel that they are likely to make progress on difficult research problems within their expected timelines for AI takeoff.

There are also opportunities to accelerate the acquisition and development of exceptional talent. A researcher who believes they are good but not brilliant enough to make significant contributions to AI Safety research might be very well suited to engage and help people who are younger and less experienced, but more talented. They might also be able to make significant contributions to movement building in areas such as AI governance or strategy, where the required technical knowledge for making significant contributions may be significantly lower.

To the extent that the above is true, it seems valuable for novice researchers to consider assigning some portion of their time to movement building activities. For instance, depending on aptitude and opportunity, they could plan to spend 25% - 75% of their time communicating research, running conferences, supporting expert researchers and engineers, and creating and running educational programs.

### What do technical contributions look like in practice?

To help illustrate the above, the following table outlines professions (in this case with a technical specialisation) which can contribute to AI Safety Movement building and suggests potential impacts on contributors, contributions and coordination.

| **Profession** | **Definition** | **Example of movement building impact** |
| --- | --- | --- |
| Research | Researching how to best develop and deploy artificial intelligence systems. | Attending technical AI conferences to network and increase the number of *contributors*. |
| Engineering | Providing technical expertise to research, develop and deploy AI systems. | Supervising new AI research projects to increase the number of *contributors*. |
| Policymaker | Providing technical expertise in policy settings. | Mentoring relevant civil servants to increase future *contributors* to policy research and enactment. |
| Entrepreneurship | Creating, launching, and managing ML related ventures. | Creating a new AI safety related research organisation which can then hire new *contributors*. |
| Consulting | Helping organisations solve technical problems or improve performance. | Helping to scale a new AI safety related research organisation which can then hire new *contributors*. |
| Management | Overseeing and directing technical operations. | Helping to manage engineers in an AI safety related research organisation to increase their *contributions*. |
| Recruiting | Finding, screening, and selecting qualified job candidates. | Recruiting and trialling engineers for an organisation to increase the number of *contributors*. |
| Education | Providing individuals with necessary knowledge and skills. | Offering technical training to provide important skills and increase the number of *contributors*. |
| Community manager | Building communities of researchers, engineers and related specialisations. | Connecting important researchers via research seminars to improve *coordination*. |

For illustration, the following table outlines skills relevant to those with technical expertise and examples of potential impacts on *contributors*, *contributions* and *coordination*.

| **Skill** | **Definition** | **Example movement building impact** |
| --- | --- | --- |
| Mentorship | Providing guidance and support to new participants within the community. | Increasing *contributors*, *contributions* and *coordination* by mentoring new researchers. |
| Ambassador/advocate | Becoming an effective representative or spokesperson for AI safety. | Communication and engagement with peers to increase awareness of AI safety issues and to attract new *contributors* to the community. |
| Machine Learning (ML) focused recruitment | Identifying, attracting, and hiring individuals with ML skills and qualifications. | Finding and testing individuals to help increase the number of *contributors*. |
| AI Safety curriculum design | Creating AI Safety educational programs and materials. | Increasing *contributors*, *contributions* and *coordination* by curating needed skills and bodies of knowledge into educational courses and materials. |
| Research dissemination | Effectively spreading research information to relevant groups. | Increasing *contributors*, *contributions* and *coordination* by creating simple and easily communicable summaries and syntheses. |
| Career coaching | Helping others with career choice. | Making progress on *contributors*, *contributions* and *coordination* by helping new and novice researchers to understand what they need to know and where to work. |
| Writing | Strong writing skills for creating clear and effective research and communication. | Writing blog posts, articles, or social media posts to share information and engage with followers, which can increase *contributions* and *coordination* within the AI safety movement. |
| Presenting | Presenting information persuasively to relevant audiences. | Giving presentations at conferences, workshops, or other events to share information and ideas about AI safety, increasing *contributions* and *coordination* within the AI safety movement. |
| Networking | Building relationships and connections with others who share similar interests or goals. | Building partnerships and collaborations with/between other organisations or individuals who are also working on AI safety issues to increase *contributions* and *coordination*. |
| Social skills | Being able to effectively understand and collaborate with others. | Communicating effectively, collaborating with others, and building trust and rapport with stakeholders to increase *contributions* and *coordination*. |
| Teaching skills | Educating others about the potential risks and challenges associated with advanced AI. | Providing training to help people develop the skills and knowledge needed to work on AI Safety and increase the number of *contributors*. |

Non-technical contributions to AI Safety Movement Building can best support AI strategy and AI governance but can also support AI Safety technical work
--------------------------------------------------------------------------------------------------------------------------------------------------------

### Non-technical people may be best suited to movement build for AI strategy and AI governance

AI Strategy and AI governance involve a lot of non-technical tasks, including setting up and managing courses, creating community surveys, communicating research, writing social media content or helping with general operations. AI Strategy and AI governance can therefore absorb many contributions of non-technical professions and skills in relatively obvious ways.

### Non-technical people can also contribute to technical AI safety

Non-technical people appear to be at a major disadvantage when it comes to helping with the technical aspect of AI Safety Movement building. They cannot provide many of the important inputs offered by technical people. They also cannot easily understand some of the technical literature and related reasoning. Accordingly, it might seem that non-technical people can only make useful contributions to AI Safety via working in AI Strategy and AI policy. In contrast, I think that non-technical people can also contribute to technical AI safety and are going to be increasingly important as it scales. Here are three reasons why.

### Non-technical people are in greater supply than technical people, they have lower opportunity costs, and their complementary skills are needed to produce the optimal outcomes from technical contributors and contributions

Non-technical people are in greater supply than technical people. I suspect that a much larger percentage of people interested in AI Safety, or open to working on it for a market rate salary, will have non-technical rather than technical backgrounds.

Non-technical people have lower opportunity costs than technical people because they don't have the competing option of doing technical research instead of movement building.

Non-technical people can contribute to the AI Safety movement by providing complementary resources that save technical people time, or address gaps in their competency or interests. For instance, a technical person might be very happy to speak at a conference but not to spend the time looking for conferences, contacting organisers, or reworking their slides and planning their trip. A nontechnical person working as an operational staff member might, however, be able and willing to do all of this. By working with the technical person, the operations person can therefore greatly increase their collective impact on AI Safety outcomes. By working with a team of researchers, they may be able to dramatically improve the level of awareness of the team's work, with compounding effects over time.

The above reasons may help explain why half or more of the staff at technical companies (e.g., those focused on software engineering) are often non-technical: marketers, lawyers, recruiters, social media managers, managers and many other forms of support staff are usually required to maximise the value of technical work, particularly at scale.
### Non-technical people who are able and willing to contribute to AI Safety should prioritise supporting technical AI Safety movement building when possible

I believe:

* In AI strategy and governance there is a higher level of awareness, understanding and usage of nontechnical people in movement building roles than in technical AI safety.
* Technical research is the most bottlenecked group within the AI safety community (although one might argue that movement building is actually the issue, as it is an upstream cause of this blocker).

I therefore believe that most non-technical AI safety movement builders will have maximal impact if they allocate as much of their time to technical AI Safety movement building as can be productively accommodated.

For instance, this could be via working at an organisation alongside technical workers in a nontechnical operations role, as discussed in the example above. In a more abstract sense, it could also mean helping to recruit people who might be more immediately productive. For instance, if you know an organisation really needs a technical recruiter, then you could have an outsized impact by helping them find one (if working in a relevant context).

### For some non-technical movement builders, it will make sense to work in a fractional role where only part of one's time is spent on AI safety work

In some cases having more than one role may outperform the alternative of working full time as an 'AI Safety Movement Builder'. For instance, a nontechnical social science researcher might have more impact from a fractional role, where they teach and do research at a university for 50% of their time, than from leaving their university to put 100% of their time into movement building.

In the fractional role, they may be able to do a range of important movement building activities: interact with more people who are unaware of but potentially open to AI safety arguments, invite speakers to talk to students, advocate for courses, etc. All of these are things they cannot do if they leave their research role. They are also potentially more likely to be able to engage with other researchers and experts effectively as an employed lecturer interested in X than as an employee of an AI safety organisation or "movement builder" with no affiliation. Fractional roles may make sense where individuals believe that the AI safety community can only make productive use of a percentage of their time (e.g., one day a week they facilitate a course).

### For some non-technical movement builders, it will make sense to work as a generalist across multiple parts of the AI safety community

In some cases a nontechnical AI safety movement builder (e.g., a recruiter or communicator) might be better off working across the AI strategy, governance and technical groups as a generalist than focusing on one area. For instance, this may give them more opportunities to make valuable connections.

### Non-technical people working as experts in AI governance or strategy roles should consider being fractional movement builders

Non-technical people working as experts in governance or strategy roles (e.g., Allan Dafoe) should consider a fractional approach to AI Safety movement building. For instance, they might consider allocating 10% of their time to networking events, talks, or mentorship.

### What do non-technical contributions look like in practice?
To help illustrate the above, the following table outlines non-technical professions which can contribute to AI Safety Movement building and explains how doing so could impact contributors, contributions and coordination.

| **Profession** | **Definition** | **Example movement building impact** |
| --- | --- | --- |
| Marketing (and communications) | Promoting and selling products or services through various tactics and channels using various forms of communication. | Using marketing communication to increase awareness of AI safety research opportunities in relevant audiences to increase the number of *contributors*. |
| Management | Overseeing and directing the operations of a business, organisation, or group. | Joining a research organisation to improve how well researchers involved in the AI safety movement *coordinate* around the organisation's goals and research agendas. |
| Entrepreneurship | Creating, launching, and managing new ventures. | Creating a new AI safety related research training program which can then hire new *contributors*. |
| Research | Gathering and analysing information to gain knowledge about a relevant topic. | Applied research to make the values, needs and plans of the AI (safety) community more legible, which can help increase *contributors*, *contributions* and *coordination*. |
| Coaching | Providing guidance, advice, and instruction to individuals or teams to improve performance. | Coaching novice individuals and organisations within the AI safety movement to help increase their *contributions*. |
| Operations | Planning, organising, and directing the use of resources to efficiently and effectively meet goals. | Reducing administration or providing support services to help AI safety organisations make better *contributions*. |
| Recruiting | Finding, screening, and selecting qualified job candidates. | Recruiting top talent and expertise to organisations to increase the number of *contributors*. |

The following table outlines skills relevant to those with non-technical expertise and examples of potential impacts on *contributors*, *contributions* and *coordination*.

| **Skill** | **Definition** | **Example movement building impact** |
| --- | --- | --- |
| Search Engine Optimisation (SEO) | Managing websites and content to increase visits from search engine traffic. | Optimising a website or online content to increase capture of AI career related search engine traffic and increase *contributors*. |
| Research lab management | Overseeing and directing the operations of a research laboratory. | Increasing *contributions* by creating and managing labs that amplify the research output of experienced AI safety researchers. |
| Commercial innovation | Creating and implementing new ideas, products, or business models. | Creating AI safety related ventures that increase funding and/or research and therefore increase the number of *contributors* to AI Safety and/or their *contributions* or *coordination*. |
| Surveying | Measuring and collecting data about a specific area or population to gain understanding. | Surveying key audiences at key intervals to better understand the values, needs and plans of the AI safety community and improve *contributors*, *contributions* or *coordination*. |
| Therapy | Treating emotional or mental health conditions through various forms of treatment. | Increasing *contributions* by helping individuals within the AI safety movement to improve their mental health, well-being and long term productivity. |
| Project management | Planning, organising, and overseeing the completion of specific projects or tasks within an organisation. | Coordinating resources and timelines, setting goals and objectives, and monitoring progress towards achieving them to help researchers in the AI safety movement make better *contributions*. |
| Job description creation | Developing adverts for a specific job within an organisation. | Creating clear, accurate, and compelling job descriptions that effectively communicate the responsibilities, qualifications, and expectations for a role and therefore increase the number of *contributors*. |
| Facilitation | Coordinating and leading meetings, workshops, and other events. | Bringing together people from different backgrounds and perspectives to discuss and work on AI Safety issues and increase *coordination*. |
| Design | Creating attractive visuals and interactive tools. | Creating engaging and compelling content that raises awareness and increases *contributions*. |

Mistakes to avoid in movement building
======================================

It shouldn't be assumed that all AI Safety Movement building projects will be net positive, or that everyone with these professions or skills would be useful within AI Safety. It is particularly important to avoid issues such as causing reputational harm to the EA and AI Safety community, speeding up capabilities research, or displacing others who might make more significant contributions if they took the role you fill. Professions such as marketing, advocacy and entrepreneurship (and related skills) are therefore particularly risky to practice.

I recommend that anyone with a relevant profession or skill should *consider* using it to support or start a project and then seek to *validate* their belief that it may be useful. For now, *validation* might involve talking with senior people in the community and/or doing some sort of trial, or seeking funding. I think it's important to have a strong model of possible downsides if you're considering doing a trial, and that some things might not be easy to test in a trial.

In future, there will hopefully be a clearer sense of i) the collective need for specific professions and skills, ii) the thresholds at which one can be confident that they/a project will provide value, and iii) the process to follow to validate whether a person/project is useful.

Summary
=======

Across my three posts, I have argued:

* [The AI Safety community mainly comprises four overlapping, self-identifying groups: *Strategy, Governance, Technical and Movement Building*.](https://forum.effectivealtruism.org/s/RwtygELTfbRJzcvwD/p/5iQoR8mhEpvRT43jv)
  + AI Safety movement builders are like *a 'Human Resources and operations team' for the AI Safety community*.
  + People in other parts of the community can (and should) contribute to movement building in a *fractional capacity*.
* AI Safety movement builders *contribute* to AI Safety by helping to *start and scale AI Safety projects*.
  + *Projects* require *contributions* from many *technical* and *non-technical professions* and *skills*, in *full-time* and *fractional* capacities.
* [AI Safety movement builders help the community to optimise three outcomes/factors:](https://forum.effectivealtruism.org/posts/YMvSZi2EWxNHwFtbb/part-2-ai-safety-movement-builders-should-help-the-community)
  + *Contributors:* individuals making positive contributions to AI Safety.
  + *Contributions:* contributors’ inputs into AI Safety work.
  + *Coordination:* understanding and aligning contributions to ensure maximal efficiency.

The summarised approach is illustrated below.

![](https://res.cloudinary.com/cea/image/upload/f_auto,q_auto/v1/mirroredImages/8XZmu8BM5JBtSnHiP/ccrbciuzpaphptxb7rxp)

Implications and ideas for the future
=====================================

I argued that a lack of shared language and understanding was the cause of many problems with AI Safety Movement Building. With that in mind, if the response to my approach is that it seems helpful (even if just as a stepping stone to a better approach), this suggests a range of implications and ideas for the future.

Use the three AI Safety Movement building outcome metrics suggested (Contributors, Contributions and Coordination), or something similar, to evaluate progress in AI Safety Movement Building
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

It may be useful to have some sort of benchmarking for progress in AI Safety Movement building. One way to do this would be to have some sort of regular evaluation of the three AI Safety Movement building outcome metrics I suggested (or similar). These outcome metrics are *Contributors*: how many people are involved in AI Safety (total and as a percentage of what seems viable); *Contributions*: how well they are functioning and performing; and *Coordination*: how well all contributors understand and work with each other.

If the factors that I have suggested seem suboptimal, then I’d welcome critique and the suggestion of better alternatives. I am confident that some sort of tracking is very important but not yet confident that my suggested outcomes are optimal to use. However, as they are the best option that I have right now, I will next use them to frame my weakly held perspective on how AI Safety Movement building is doing right now.

### I think that AI Safety Movement building is doing well for a small community but has significant potential for improvement

I really appreciate all the current AI Safety movement building that I am seeing, and I think that there is a lot of great work. Where we are reflects very well on the small number of people working in this space. I also think that we probably have a huge amount of opportunity to do more. While I have low confidence in my intuitions for why we are under-optimised, I will now explain them for the sake of hopefully provoking discussion and increasing shared understanding.
All the evaluations that follow are fuzzy estimates, offered in the context of me finally being convinced that optimising the deployment of AI is our most important societal challenge, and thinking about what I might expect if, say, the US government were as convinced of the severity of this issue as many of us are. From this perspective, where we are is far from where I think we should be if we believe that there is a ~5%+ chance of AI wiping us out, and a much higher chance of AI creating major non-catastrophic problems (e.g., economic, geopolitical, and suffering-focused risks).

**I think that probably less than 5% of all potentially valuable contributors are currently working on AI safety.** I base this on the belief that the vast majority of people who could work on AI safety don’t even know about the core arguments and opportunities. Of those who do know of these options, many have a very high-friction path to getting involved, especially if they are outside key hubs. For instance, there are lots of different research agendas to evaluate, good reasons to think that none of them work, limited resources beyond introductory content, a lack of mentors, and a lack of credible/legible safe career options to take.

**I think that probably less than 50% of maximal potential individual productivity has been achieved in the current group of contributors.** I base this on conversations with people in the community and work I have done alongside AI safety researchers. Physical health, mental health and social issues seem very significant. Burnout rates seem extremely high. Some people seem to lack technical resources needed to do their research.

**I think that probably less than 10% of optimal coordination has been achieved across all contributors.** I base this on conversations with people in the community and work I have done alongside AI safety researchers. It seems that there is relatively little communication between many of the top researchers and labs, and little clarity around which research agendas are the most well-accepted. There seems to be even less coordination across areas within the community (e.g., between governance and technical research).

Assume we regard the total output of the AI Safety community as being approximated by the product of contributors, contributions and coordination. This means that **I think that we are at around 0.25% of our maximum impact (0.05 × 0.5 × 0.1 = 0.0025, i.e. 0.25%)**. I’d welcome critiques that endorse or challenge my reasoning here.

Determining clear priorities for AI Safety Movement Building is the most important bottleneck to address once we have a shared understanding
--------------------------------------------------------------------------------------------------------------------------------------------

To varying extents, we appear to lack satisfactorily comprehensive answers to key questions like: Which AI Safety movement building projects or research projects are most valued? Which professions and skills are most needed by AI Safety organisations? Which books, papers and resources are most critical to read for new community members? Which AI organisations are best to donate to? We also appear to lack a comprehensive understanding of the drivers of differences in opinion on these questions.

All of this creates uncertainty, anxiety, inaction and inefficiency. It therefore seems very important to set up projects that explore and surface our priorities and the values and differences that underpin them.
Projects like these could dramatically reduce uncertainty and inefficiency by helping funders, community members, and potential community members to understand the views and predictions of the wider community, which in turn may enable better clarity and coordinated action. Here are three examples which illustrate the sorts of insights and reactions that could be provoked.

* If we find that 75% of a sample of community members are optimistic about project type Y (courses teaching skill x in location y) and only 20% are optimistic about project Z (talent search for skill y in location x), this could improve related funding allocation, and communication and action from movement builders and new contributors.
* If we find that technical AI Safety organisations expect to hire 100 new engineers with a certain skill over the next year, then we can calibrate a funding and movement building response to create/run related groups and training programs in elite universities and meet that need.
* If we find that 30% of a sample of community members see a risk that movement building project type Y might be net negative, and explain why, we can use this information to iterate project type Y to add safeguards that mitigate the perceived risks. For instance, we might add a supervisory team or a pilot testing process. If successful, we might identify a smaller level of concern in a future survey.

Here are some other benefits from prioritisation:

* To help new entrants to the AI safety community to understand the foundational differences in opinion which have caused its current divisions, and shape their values around the organisations and people they trust most.
* To help Movement Builders to communicate such information to the people they are in contact with, raising the probability that these people act optimally (e.g., by working on the skills or projects that seem best in our collective expectation).
* To help current contributors realise that they have overlooked or undervalued key projects, and align to work on those which are forecast to be more effective.

I am not confident of the best way to pursue this goal, but here are some ideas for consideration:

* We create panel groups for the different sectors of the AI Safety Community. Participants in the groups consent to be contacted about paid participation in surveys, interviews and focus groups relating to AI Safety Community matters.
* Groups run regular projects to aggregate community and expert opinions on key questions (e.g., see the questions above, but note that we could also collect other key questions in these surveys).
* Where surveys or dialogue show important differences in prediction (e.g., in support for different research agendas or movement building projects), groups curate these differences and try to identify the causes of the underlying differences.
* Groups run debates and/or discussions between relevant research organisations and actors to explore and surface relevant differences in opinion.
* Groups try to explore key differences in intuitions between organisations and actors by collecting predictions and hypotheses and testing them.

**What are some examples?** This survey of [intermediate goals in AI governance](https://forum.effectivealtruism.org/posts/g4fXhiJyj6tdBhuBK/survey-on-intermediate-goals-in-ai-governance) is an example of the sort of survey work that I have in mind.
I see [curation work](https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/?commentId=8pJYzFSfvAaFZM5tq) as a precursor for surveys of this nature (because of the need for shared language). I might be wrong to believe that determining priorities is such a problem or that related projects are so important, so please let me know if you agree or disagree.

Fractional movement building seems like the best way to grow the community once it knows if, and how, it wants to grow
----------------------------------------------------------------------------------------------------------------------

I have argued that fractional movement building (allocating some, but not all, of one’s productive time to movement building) is a useful approach for many people to consider. Here is why I think it is probably the best approach for the AI Safety community to grow with.

### It will usually be better for most people in the AI safety community to do fractional movement building than to entirely focus on direct work

It seems likely that the average AI safety expert will get maximal impact from their time when they allocate some portion of that time to low-frequency, but high-impact, movement building work. This could be networking with intellectual peers in other areas and/or providing little-known meta-knowledge (e.g., about what to read or do to make progress on problems), which can make it much easier for other and future researchers to make important contributions.

Why is this? Mainly because [AI Safety is a pre-paradigmatic field](https://www.alignmentforum.org/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field#:~:text=AI%20safety%20is%20a%20pre,and%20how%20to%20approach%20it.) of research and practice, where there is no clear scientific consensus on what to do and with what assumptions. This has the following implications.

**Many more potentially good contributors are unaware of the field of AI Safety than are aware of it (probably by orders of magnitude).** I suspect there are millions of very smart people who are not aware of, or engaged with, AI Safety Research. I am optimistic that some portion of these people can be persuaded to work on AI Safety if engaged by impressive and capable communicators over extended periods, which will require our best brains to communicate and engage with outside audiences (ideally with the help of others to coordinate and communicate on their behalf where this is helpful).

**Most important information in AI Safety is in the heads of the expert early adopters.** Because AI Safety is a pre-paradigmatic field of research and practice, there is little certainty about what good content is, and that bar keeps changing as the assumptions change. As a result, there are few to no production systems for, or examples of, well-synthesised explanatory content (e.g., a college-level textbook). This means that even a very smart person seeking to educate themselves about the topic will lack access to the most important and current knowledge unless they have an expert guide.
### It will usually be better for AI safety movement building to focus on growing fractional movement building commitments than full-time movement building commitments

**Fractional movement building can likely engage more people than full-time movement building**

*Most people are more comfortable and capable of persuading other people to do something when they already do that thing or something similar.* Someone with a PhD in X, researching X, is probably going to feel much more comfortable recommending other people to do a PhD in, or research on, X than if they have only a superficial understanding of this area and no professional experience.

*Most people who are at all risk-averse will generally feel more comfortable working on something superficially legitimate than on being a full-time ‘movement builder’.* They will be more comfortable being hired to research X at Y institution, which may involve doing movement building, than being a full-time movement builder for CEA. As a personal example, I feel it would look very strange for my career if I went from being a university-based academic/consultant with a reputation built up over several years to announcing that I am a freelance ‘movement builder’ or one working for CEA. I imagine that many of my professional contacts would find my new role very strange/cultish, and that it would reduce the probability of getting future work if it didn’t work out, as it’s not a well-understood or respected role outside of EA.

*It is easier to get involved in a fractional role.* Many more people can give 10% of their time to be a fractional movement builder, and many more organisations can make use of small contributions than full-time commitments.

**Fractional movement building will often provide a better return on time and effort than full-time movement building**

*Community builders with recognised roles at credible organisations will be seen as credible and trustworthy sources of information.* If someone approached me as a ‘climate change/socialism/veganism’ movement builder working for a centre I hadn’t heard of or named after that movement, I’d probably be more resistant to and sceptical of their message than if they worked for a more conventional and/or reputable organisation in a more conventional and/or reputable role. Here are two examples:

* Someone working as an average-level researcher at an AI safety research organisation is likely to be a more trusted and knowledgeable communicator about their research area and related research opportunities than someone who works full-time as a movement builder.
* Someone who works in a university as a lecturer is going to be much better at credibly influencing research students and other faculty than someone who is outside the community and unaffiliated. As a lecturer spending 20% of their time on AI Safety Movement building, this person can influence hundreds, maybe thousands, of students with courses. As a full-time movement builder they would lose this access and the associated credibility.

*Most people who can currently contribute the most to movement building are better suited to fractional than full-time roles.* As stated above, most experts, particularly technical experts, have unique but rare opportunities to influence comparably talented peers, and/or provide them with important knowledge that can significantly speed up their research.
*Fractional movement building is more funding-efficient.* Much movement building work by fractional movement builders (e.g., networking, presenting, or teaching) is partially or fully funded by their employer. It may displace less favourable work (e.g., a different lecturer who would not offer an AI Safety aligned perspective). This saves EA funders from having to fund such work, which is valuable while we are funding-constrained.

Feedback
========

I have outlined a basic approach/theory of change - does it seem banal, or flawed? I will pay you for helpful feedback!
-----------------------------------------------------------------------------------------------------------------------

Supportive or critical feedback on this or my other posts would be helpful for calibrating my confidence in this approach and for deciding what to do or advocate for next. For instance, I am interested to know:

* What, if anything, is confusing from the above?
* What, if anything, was novel, surprising, or useful/insightful about the proposed approach for AI safety movement building I outlined in my post?
* What, if anything, is the biggest uncertainty or disagreement that you have, and why?

If you leave feedback, please consider indicating how knowledgeable and confident you are, to help me to update correctly.

### **To encourage feedback, I am offering a bounty.**

I will pay up to 200 USD in Amazon vouchers, shared via email, to up to 10 people who give helpful feedback on this post or the two previous posts in the series. I will also consider rewarding anonymous feedback left [here](https://forms.gle/Ds7ACVfLiaab7Myy5) (but you will need to give me an email address). I will share anonymous feedback if it seems constructive and I think other people will benefit from seeing it. I will also leave a few comments that people can agree/disagree vote on to provide quick input.

What next?
----------

Once I have reviewed and responded to this feedback, I will make a decision about my next steps. I will probably start working on a relevant fractional role unless I am subsequently convinced of a better idea. Please let me know if you are interested in any of the ideas proposed, and I will explore whether we could collaborate or I can help.

Acknowledgements
----------------

The following people helped review and improve this post: Alexander Saeri, Chris Leong, Bradley Tjandra and Ben Smith. All mistakes are my own. This work was initially supported by a grant from the FTX Regranting Program to allow me to explore learning about and doing AI safety movement building work. I don’t know if I will use it now, but it got me started.
"This sequence of posts is a primer on game theory intended at an introductory level. Because it is introductory, Less Wrong veterans may find some parts boring, obvious, or simplistic - although hopefully nothing is so simplistic as to be outright wrong.Parts of this sequence draw heavily upon material from The Art of Strategy by Avinash Dixit and Barry Nalebuff, and it may in part be considered a (very favorable) review of the book accompanied by an exploration of its content. I have tried to include enough material to be useful, but not so much material that it becomes a plagiarism rather than a review (it's probably a bad idea to pick a legal fight with people who write books called The Art of Strategy.) Therefore, for the most complete and engaging presentation of this material, I highly recommend the original book. All posts will be linked from here as they go up:1. Introduction to Game Theory: Sequence Guide2. Backward Reasoning Over Decision Trees3. Nash Equilibria and Schelling Points4. Introduction to Prisoners' Dilemma5. Real World Solutions to Prisoners' Dilemmas6. Interlude for Behavioral Economics7. What Is Signaling, Really?8. Bargaining and Auctions9. Imperfect Voting Systems10. Game Theory As A Dark Art Special thanks to Luke for his book recommendation and his strong encouragement to write this." The above is a chunk of text from a post created by a user on the website LessWrong. What is this text talking about? The text is introducing a series of posts about game theory, aimed at a beginner level. Game theory is a branch of mathematics that studies strategic interactions, meaning situations where the outcome for each person or 'player' depends on the actions of all. The author mentions that the sequence will cover a range of topics within game theory, including backward reasoning over decision trees, Nash equilibria, Schelling points, and the Prisoner's Dilemma. These are all key concepts in the field. The sequence will also delve into real-world applications of game theory, behavioral economics, signaling, bargaining and auctions, voting systems, and the potential misuse of game theory. The author notes that much of the content is based on "The Art of Strategy" by Avinash Dixit and Barry Nalebuff, and recommends this book for a more comprehensive understanding of the material. In essence, this text is a roadmap for a series of educational posts on game theory. Can you elaborate on how signaling is used in game theory and provide a real-world example of its application? In game theory, signaling refers to an action taken by an informed party, or a "signaler", to reveal private information about their own type to an uninformed party, or a "receiver". The goal of signaling is to alter the receiver's beliefs about the signaler's type and influence the receiver's subsequent actions to the signaler's advantage. Signaling can be seen in various real-world situations, including job markets, financial markets, politics, and even animal behavior. Let's take the job market as an example. In this scenario, the job applicant is the signaler and the employer is the receiver. The applicant wants to signal their ability and potential to the employer. One common way to do this is through educational qualifications. By obtaining a degree from a reputable institution, the applicant sends a signal to the employer about their skills, dedication, and intelligence. This signal can influence the employer's decision to hire the applicant. 
However, it's important to note that for signaling to be effective, it must be costly and the cost must be correlated with the type of the signaler. In the job market example, obtaining a degree is costly in terms of time, effort, and money. Moreover, the cost is likely to be lower for high-ability types (since they can learn more quickly, find studying less effortful, etc.) than for low-ability types. This makes education a credible signal of ability. Of course, signaling is a complex process and it doesn't always work perfectly in practice. For instance, there can be issues with false or misleading signals, and receivers need to be savvy in interpreting the signals they receive. But when used effectively, signaling can be a powerful tool for conveying private information and influencing strategic interactions. Explain the concept of "costly signaling" in game theory and how it ensures the credibility of the signal. Costly signaling, also known as the handicap principle, is a concept in game theory that suggests that for a signal to be credible and reliable, it must be costly for the signaler to produce. The cost ensures that only those who can afford the cost will send the signal, thereby making the signal a reliable indicator of the signaler's underlying quality or ability. The concept is rooted in the idea that dishonest or false signals are detrimental to the system. If signals were costless, anyone could send any signal, making it impossible for the receiver to distinguish between honest and dishonest signalers. By making signals costly, the system ensures that only those who can bear the cost (i.e., those who are truly of high quality or ability) will choose to send the signal. A classic example of costly signaling comes from the animal kingdom. The peacock's tail is a signal of fitness. The tail is large, bright, and requires a lot of energy to maintain. It also makes the peacock more visible to predators. Therefore, only the fittest peacocks can afford to have such a tail. When a peahen sees a peacock with a large, bright tail, she can trust the signal because it would be too costly for a less fit peacock to produce the same signal. In human societies, costly signaling can be seen in various contexts. For instance, in the job market, obtaining a degree from a prestigious university is a costly signal of ability and dedication. The cost comes in the form of tuition fees, time, and effort. Only those who are truly dedicated and capable can afford to pay these costs, making the degree a credible signal to potential employers.
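To make the credibility condition concrete, here is a minimal sketch in Python of a Spence-style signaling check for the job-market example above. All of the numbers (wages and degree costs) are illustrative assumptions, not figures from the discussion; the point is only to show the two incentive conditions a costly signal must satisfy: the high-ability type must find the signal worth acquiring, and the low-ability type must not.

```python
# Minimal sketch of a Spence-style costly signaling check.
# All numbers are illustrative assumptions, not claims from the text above.

WAGE_IF_BELIEVED_HIGH = 100  # pay offered to a worker perceived as high-ability
WAGE_IF_BELIEVED_LOW = 50    # pay offered to a worker perceived as low-ability

# The cost of acquiring the signal (e.g., a degree) differs by type:
# it is cheaper for high-ability workers, which is what makes it informative.
COST_OF_DEGREE = {"high": 20, "low": 60}

def signal_is_credible():
    """A separating equilibrium requires each type to prefer its own choice:
    high-ability workers signal, and low-ability workers do not mimic them."""
    high_prefers_signaling = (
        WAGE_IF_BELIEVED_HIGH - COST_OF_DEGREE["high"] >= WAGE_IF_BELIEVED_LOW
    )
    low_prefers_not_signaling = (
        WAGE_IF_BELIEVED_LOW >= WAGE_IF_BELIEVED_HIGH - COST_OF_DEGREE["low"]
    )
    return high_prefers_signaling and low_prefers_not_signaling

print(signal_is_credible())  # True with the numbers above
# If the degree cost the low type only 30 instead of 60, mimicry would pay
# (100 - 30 > 50), the types would no longer separate, and the signal
# would stop being credible.
```

Under these made-up numbers both conditions hold, so education separates the types; lowering the low type's cost breaks the second condition, which is exactly the sense in which the cost, and its correlation with type, is what carries the information.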
GDP per capita in 2050

Abstract

Here, I present GDP (per capita) forecasts of major economies until 2050. Since GDP per capita is the best generalized predictor of many important variables, such as welfare, GDP forecasts can give us a more concrete picture of what the world might look like in just 27 years. The key claim here is: even if AI does not cause transformative growth, our business-as-usual near-future is still surprisingly different from today.

Latest Draft as PDF

Results

In recent history, we've seen unprecedented economic growth and rises in living standards. Consider this graph:[1]

How will living standards improve as GDP per capita (GDP/cap) rises? Here, I show data that projects GDP/cap until 2050. Forecasting GDP per capita is a crucial undertaking as it strongly correlates with welfare indicators like consumption, leisure, inequality, and mortality. These forecasts make the future more concrete and give us a better sense of what the world will look like soon. Abstract thoughts about utopia generate little emotional energy; I find these forecasts more plastic and informative, because GDP/cap is highly predictive of welfare.[2] GDP/cap's generalized predictive power helps us paint a more vivid picture of what the world will look like soon. The business-as-usual near future suggested by the data below could be seen as a soft lower bound on how much the world will change. And yet, this world still seems radically different from today.

Since the figures below are adjusted for purchasing power parity (PPP), you can compare the GDP/cap of a poorer country in 2050 with the GDP/cap of a richer country in 2020. For instance, between now and 2050, China's GDP/cap will go from $19k to $43k, which is similar to France's today. And so, by 2050, 1.3B Chinese people might enjoy a lifestyle not dissimilar to that of a typical French person today.

These GDP/cap are ~3x[3] higher than the median (i.e. typical) income, due to income inequality. Instead of downward adjusting them in
The unexpected difficulty of comparing AlphaStar to humans

*By Rick Korzekwa, 17 September 2019*

Artificial intelligence defeated a pair of professional Starcraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became clear that not everybody was satisfied with how the AI agent, called AlphaStar, interacted with the game, or how its creator, DeepMind, presented it. Many observers complained that, in spite of DeepMind’s claims that it performed at similar speeds to humans, AlphaStar was able to control the game with greater speed and accuracy than any human, and that this was the reason why it prevailed. Although I think this story is mostly correct, I think it is harder than it looks to compare AlphaStar’s interaction with the game to that of humans, and to determine to what extent this mattered for the outcome of the matches. Merely comparing raw numbers for actions taken per minute (the usual metric for a player’s speed) does not tell the whole story, and appropriately taking into account mouse accuracy, the differences between combat actions and non-combat actions, and the control of the game’s “camera” turns out to be quite difficult.

Here, I begin with an overview of Starcraft II as a platform for AI research, a timeline of events leading up to AlphaStar’s success, and a brief description of how AlphaStar works. Next, I explain why measuring performance in Starcraft II is hard, show some analysis on the speed of both human and AI players, and offer some preliminary conclusions on how AlphaStar’s speed compares to humans. After this, I discuss the differences in how humans and AlphaStar “see” the game and the impact this has on performance. Finally, I give an update on DeepMind’s current experiments with Starcraft II and explain why I expect we will encounter similar difficulties when comparing human and AI performance in the future.

Why Starcraft is a Target for AI Research
------------------------------------------

Starcraft II has been a target for AI for several years, and some readers will recall that Starcraft II appeared on our [2016 expert survey](https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/). But there are many games and many AIs that play them, so it may not be obvious why Starcraft II is a target for research or why it is of interest to those of us that are trying to understand what is happening with AI. For the most part, Starcraft II was chosen because it is popular, and it is difficult for AI.

Starcraft II is a real time strategy game, and like similar games, it requires a variety of tasks: harvesting resources, constructing bases, researching technology, building armies, and attempting to destroy the opponent’s base are all part of the game. Playing it well requires balancing attention between many things at once: planning ahead, ensuring that one’s units[1](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-1-1980 "“Units” in Starcraft are the diverse elements that make up a player’s army. For example, in the December matches, AlphaStar preferred a combination of units called Stalkers that walk on the ground and shoot projectiles and flying units which are strong against other flying units, which have a special ability against ground units.") are good counters for the enemy’s units, predicting opponents’ moves, and changing plans in response to new information.
There are other aspects that make it difficult for AI in particular: it has imperfect information[2](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-2-1980 "Imperfect information means that players can’t see everything that’s going on in the game; chess, for example, has perfect information because both players see the whole board. Starcraft has imperfect information because you only have access to information about your units and what they can “see”."), an extremely large action space, and takes place in real time. When humans play, they engage in long term planning, making the best use of their limited capacity for attention, and crafting ploys to deceive the other players.

The game’s popularity is important because it makes it a good source of extremely high human talent and increases the number of people that will intuitively understand how difficult the task is for a computer. Additionally, as a game that is designed to be suitable for high-level competition, the game is carefully balanced so that competition is fair, does not favor just one strategy[3](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-3-1980 "https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii “StarCraft is a game where, just like rock-paper-scissors, there is no single best strategy.”"), and does not rely too heavily on luck.

Timeline of Events
-------------------

To put AlphaStar’s performance in context, it helps to understand the timeline of events over the past few years:

**November 2016:** Blizzard and DeepMind [announce](https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/) they are launching a new project in Starcraft II AI

**August 2017:** DeepMind [releases](https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/) the Starcraft II API, a set of tools for interfacing AI with the game

**March 2018:** Oriol Vinyals gives an [update](https://news.blizzard.com/en-us/starcraft2/21509421/checking-in-with-the-deepmind-starcraft-ii-team), saying they’re making progress, but he doesn’t know if their agent will be able to beat the best human players

**November 3, 2018:** Oriol Vinyals gives another update at a Blizzcon panel, and shares a sequence of videos demonstrating AlphaStar’s progress in learning the game, including learning to win against the hardest built-in AI. When asked if they could play against it that day, he says “For us, it’s still a bit early in the research.”

**December 12, 2018:** AlphaStar wins five straight matches against TLO, a professional Starcraft II player, who was playing as Protoss[4](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-4-1980 "Protoss is one of the three “races” that a player can choose in Starcraft II, each of which requires different strategies to play well"), which is off-race for him. DeepMind keeps the matches secret.
**December 19, 2018:** AlphaStar, given an additional week of training time[5](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-5-1980 "DeepMind says this was “after training our agents for an additional week”, though it is unclear how much of the week in between the matches was spent training"), wins five consecutive Protoss vs Protoss matches vs MaNa, a pro Starcraft II player who is higher ranked than TLO and specializes in Protoss. DeepMind continues to keep the victories a secret.

**January 24, 2019:** DeepMind [announces](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) the successful test matches vs TLO and MaNa in a live video feed. MaNa plays a live match against a version of AlphaStar which had more constraints on how it “saw” the map, forcing it to interact with the game in a way more similar to humans[6](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-6-1980 "More on this in the section titled The Camera"). AlphaStar loses when MaNa finds a way to exploit a blatant failure of the AI to manage its units sensibly. The replays of all the matches are released, and people start arguing[7](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-7-1980 "Many of these arguments can be found on reddit. See, for example: https://www.reddit.com/r/pcgaming/comments/ajo1rd/alphastar_ai_beats_starcraft_pros_by_deepmind/") about how (un)fair the matches were, whether AlphaStar is any good at making decisions, and how honest DeepMind was in presenting the results of the matches.

**July 10, 2019:** DeepMind and Blizzard announce that they will allow an experimental version of AlphaStar to play on the European ladder[8](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-8-1980 "A ladder in online gaming is a competitive league in which players “climb” a series of ranks by winning matches against increasingly skilled players"), for players who opt in. The agent will play anonymously, so that most players will not know that they are playing against a computer. Over the following weeks, players attempt to discern whether they played against the agent, and some post replays of matches in which they believe they were matched with the agent.

How AlphaStar works
--------------------

The best place to learn about AlphaStar is from [DeepMind’s page](https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii) about it. There are a few particular aspects of the AI that are worth keeping in mind:

**It does not interact with the game like a human does:** Humans interact with the game by looking at a screen, listening through headphones or speakers, and giving commands through a mouse and keyboard. AlphaStar is given a list of units or buildings and their attributes, which includes things like their location, how much damage they’ve taken, and which actions they’re able to take, and gives commands directly, using coordinates and unit identifiers. For most of the matches, it had access to information about anything that wouldn’t normally be hidden from a human player, without needing to control a “camera” that focuses on only one part of the map at a time. For the final match, it had a camera restriction similar to humans, though it still was not given screen pixels as input.
Because it gives commands directly through the game, it does not need to use a mouse accurately or worry about tapping the wrong key by accident.

**It is trained first by watching human matches, and then through self-play:** The neural network is trained first on a large database of matches between humans, and then by playing against versions of itself.

**It is a set of agents selected from a tournament:** Hundreds of versions of the AI play against each other, and the ones that perform best are selected to play against human players. Each one has its own set of units that it is incentivized to use via reinforcement learning, so that they each play with different strategies. TLO and MaNa played against a total of 11 agents, all of which were selected from the same tournament, except the last one, which had been substantially modified. The agents that defeated MaNa had each played for hundreds of years in the virtual tournament[9](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-9-1980 "https://www.reddit.com/r/MachineLearning/comments/ajgzoc/we_are_oriol_vinyals_and_david_silver_from/eexstlo/ “At an average duration of 10 minutes per game, this amounts to about 10 million games. Note, however, that not all agents were trained for as long as 200 years, that was the maximum amongst all the agents in the league.”").

January/February Impressions Survey
------------------------------------

Before deciding to focus my investigation on a comparison between human and AI performance in Starcraft II, I conducted an informal survey with my Facebook friends, my colleagues at AI Impacts, and a few people from an effective altruism Facebook group. I wanted to know what they were thinking about the matches in general, with an emphasis on which factors most contributed to the outcome of the matches. I’ve put details about my analysis and the full results of the survey in the appendix at the end of this article, but I’ll summarize a few major results here.

#### **Forecasts**

The timing and nature of AlphaStar’s success seems to have been mostly in line with people’s expectations, at least at the time of the announcement. Some respondents did not expect to see it for a year or two, but on average, AlphaStar was less than a year earlier than expected. It is probable that some respondents had been expecting it to take longer, but updated their predictions in 2016 after finding out that DeepMind was working on it. For future expectations, a majority of respondents expect to see an agent (not necessarily AlphaStar) that can beat the best humans without any of the current caveats within two years. In general, I do not think that I worded the forecasting questions carefully enough to infer very much from the answers given by survey respondents.

Some readers may be wondering how these survey results compare to those of our more careful 2016 survey, or how we should view the earlier survey results in light of MaNa and TLO’s defeat at the hands of AlphaStar. The 2016 survey specified an agent that only receives a video of the screen, so that prediction has not yet resolved.
But the median respondent assigned 50% probability of seeing such an agent that can defeat the top human players at least 50% of the time by 2021[10](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-10-1980 "See “years by probability” at https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/"). I don’t personally know how hard it is to add in that capability, but my impression from speaking to people with greater machine learning expertise than mine is that this is not out of reach, so these predictions still seem reasonable, and are not generally in disagreement with the results from my informal survey.

#### **Speed**

Nearly everyone thought that AlphaStar was able to give commands faster and more accurately than humans, and that this advantage was an important factor in the outcome of the matches. I looked into this in more detail, and wrote about it in the next section.

#### **Camera**

As I mentioned in the description of AlphaStar, it does not see the game the same way that humans do. Its visual field covered the entire map, though its vision was still affected by the usual fog of war[11](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-11-1980 "In Starcraft II, players can only see the parts of the map that are within visual range of their units and buildings, with a few exceptions."). Survey respondents ranked this as an important factor in the outcome of the matches.

Given these results, I decided to look into the speed and camera issues in more detail.

The Speed Controversy
----------------------

Starcraft is a game that rewards the ability to micromanage many things at once and give many commands in a short period of time. Players must simultaneously build their bases, manage resource collection, scout the map, research better technology, build individual units to create an army, and fight battles against other players. The combat is sufficiently fine-grained that a player who is outnumbered or outgunned can often come out ahead by exerting better control over the units that make up their military forces, both on a group level and an individual level. For years, there have been simple Starcraft II bots that, although they cannot win a match against a highly-skilled human player, can do [amazing things](https://tl.net/forum/starcraft-2/497826-micro-ai-bot) that humans can’t do, by controlling dozens of units individually during combat. In practice, human players are limited by how many actions they can take in a given amount of time, usually measured in actions per minute (APM). Although DeepMind imposed restrictions on how quickly AlphaStar could react to the game and how many actions it could take in a given amount of time, many people believe that the agent was sometimes able to act with superhuman speed and precision.

Here is a graph[12](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-12-1980 "This and all of the following data come from Starcraft II replay files. Replay files are lists of commands given by each player, which can then be run through the full Starcraft II client to reproduce the entire match. These files can also be analyzed using software like Scelight (https://sites.google.com/site/scelight/) to extract metrics like actions per minute or fraction of resources spent, and to create graphs") of the APM for MaNa (red) and AlphaStar (blue), through the second match, with five-second bins:

![](https://lh4.googleusercontent.com/e4pWzbCIUWfkO1P1amIBZkI69C5EOGEOle0BN0olaycqhKrlBYXEbg8uQM6CsYN3u96dI6w3r4mp0lTfbpZmR1prBsC_4JHlwQjxPtYOx2rE5K17WTBU9wKmEB16aZzMLMNa6Zo0)

Actions per minute for MaNa (red) and AlphaStar (blue) in their second game. The horizontal axis is time, and the vertical axis is 5 second average APM.

At first glance, this looks reasonably even. AlphaStar has both a lower average APM (180 vs MaNa’s 270) for the whole match, and a lower peak 5 second APM (495 vs MaNa’s 615). This seems consistent with DeepMind’s claim that AlphaStar was restricted to human-level speed. But a more detailed look at which actions are actually taken during these peaks reveals some crucial differences. Here’s a sample of actions taken by each player during their peaks:

![](https://lh5.googleusercontent.com/VEYbo7vFZ5ZrLYVRGul4Q5D_O6M2ufoibP7NFVfqcbsRUh_s918pETzY2fKYIj8U6etfdLs0L-z7nZkKz4VvXNMZ4scsVD_37hlV30Rm5XblwsbZfLryh06elJEMBv8LPThTUeh6)

Lists of commands for MaNa and AlphaStar during each player’s peak APM for game 2

MaNa hit his APM peaks early in the game by using hot keys to twitchily switch back and forth between control groups[13](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-13-1980 "Starcraft II allows players to assign hot-keys to groups of units and buildings, called control groups, so that they can select the right units more quickly") for his workers and the main building in his base. I don’t know why he’s doing this: maybe to warm up his fingers (which apparently is a thing), as a way to watch two things at once, to keep himself occupied during the slow parts of the early game, or some other reason understood only by the kinds of people that can produce Starcraft commands faster than I can type. But it drives up his peak APM, and probably is not very important to how the game unfolds[14](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-14-1980 "As a quick experiment, I tried playing a match against Starcraft II’s built-in AI in which I attempted to add in a lot of extraneous actions, like spam-clicking commands and rapidly switching back-and-forth between control groups when I didn’t need to. Then I compared it to a match I’d played vs the built-in AI earlier that same day, shortly before I thought to do the experiment. The spam-filled match had an average APM of 130, while the non-spam match had an average of 50 APM (yeah, I’m not very good at Starcraft). I’d say the two matches went about as well as each other, but this is partly because I did not try to keep the spam going during combat."). Here’s what MaNa’s peak APM looked like at the beginning of Game 2 (if you look at the bottom of the screen, you can see that the units he has selected switches back-and-forth between his workers and the building that he uses to make more workers):

![](https://lh6.googleusercontent.com/GOAeZ_aPwFe18bvrlQF58fGlIFFq35q9xlghIuEE2at00a0Pbx7bY2o-L7gQJ6a4z2jC2RFu2sTagzMTJLEinugDENFALNDHo3LO09J1Tz-XWb8Z83uGFL1U6KLJ-q7lVE6bvo1C)

MaNa’s play during his peak APM for match 2.
Most of his actions consist of switching between control groups without giving new commands to any units or buildings

AlphaStar hit peak APM in combat. The agent seems to reserve a substantial portion of its limited actions budget until the critical moment when it can cash them in to eliminate enemy forces and gain an advantage. Here’s what that looked like near the end of game 2, when it won the engagement that probably won it the match (while still taking a few actions back at its base to keep its production going):

![](https://lh3.googleusercontent.com/1MbLA3GbDuFucJIMxXKFzPLQAt2oeK-kA1-Vp4EpNDe868tcTG3qhsWNvTD57xSX1ggppKKDTCnw5r6D3FvAI1wVYGrcNEhcSIv7kHKOYkyqRNHHex7SyyY-uVwvD2YcfRPIFasd)

AlphaStar’s play during its peak APM in match 2. Most of its actions are related to combat, and require precise timing.

It may be hard to see what exactly is happening here for people who have not played the game. AlphaStar (blue) is using extremely fine-grained control of its units to defeat MaNa’s army (red) in an efficient way. This involves several different actions: commanding units to move to different locations so they can make their way into his base while keeping them bunched up and avoiding spots that make them vulnerable, focusing fire on MaNa’s units to eliminate the most vulnerable ones first, using special abilities to lift MaNa’s units off the ground and disable them, and redirecting units to attack MaNa’s workers once a majority of MaNa’s military units are taken care of.

Given these differences between how MaNa and AlphaStar play, it seems clear that we can’t just use raw match-wide APM to compare the two, which most people paying attention seem to have noticed fairly quickly after the matches. The more difficult question is whether AlphaStar won primarily by playing with a level of speed and accuracy that humans are incapable of, or by playing better in other ways. Though based on the analysis that I am about to present I think the answer is probably that AlphaStar won through speed, I also think the question is harder to answer definitively than many critics of DeepMind are making it out to be.

A [very fast human](https://www.youtube.com/watch?v=HRsDAX8DfBw&t=611) can average well over 300 APM for several minutes, with 5 second bursts at over 600 APM. Although these bursts are not always throwaway commands like those from the MaNa vs AlphaStar matches, they tend not to be commands that require highly accurate clicking, or rapid movement across the map. Take, for example, this 10 second, 600 APM peak from current top player Serral:

![](https://lh5.googleusercontent.com/KJEH8juVslGQ3ytg7rTnqcKceUNmXpD6NHTsy8HpdOiQnsYC93QKrNVtBCwNIiVl4GH9n-hM1i57U2FS8tVR239lYX_nTAebA5CnTKSFaa3scexbeScOcZv-lQvdmvIwnzY91vyg)

Serral’s play during a 10 second, 600 APM peak

Here, Serral has just finished focusing on a pair of battles with the other player, and is taking care of business in his base, while still picking up some pieces on the battlefield. It might not be obvious why he is issuing so many commands during this time, so let’s look at the list of commands:

![](https://lh3.googleusercontent.com/hJMQ92gS4XI8NaK2m4Um9DgVz40vjP1wCAkuOXDnPMM4SaJRnU8CoJPz8y_nSYpi69Eq4ka-Ey5PcrzOdG1AfDQ23wGjPoCthPT5GUmvR9ubCajurngohwB25tNIuqa1BhsiEvYJ)

The lines that say “Morph to Hydralisk” and “Morph to Roach” represent a series of repeats of that command.
For a human player, this is a matter of pressing the same hotkey many times, or even just holding down the key to give the command very rapidly[15](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-15-1980 "This can be done extremely quickly, if you modify the settings of your OS and hardware, with 100+ key repeats per second (which corresponds to 6000 APM)"). You can see this in the gif by looking at the bottom center of the screen where he selects a bunch of worm-looking things and turns them all into a bunch of egg-looking things (it happens very quickly, so it can be easy to miss). What Serral is doing here is difficult, and the ability to do it only comes with years of practice. But the raw numbers don’t tell the whole story. Taking 100 actions in 10 seconds is much easier when a third of those actions come from holding down a key for a few hundred milliseconds than when they each require a press of a different key or a precise mouse click. And this is without all the extraneous actions that humans often take (as we saw with MaNa).

Because it seems to be the case that peak human APM happens outside of combat, while AlphaStar’s wins happened during combat APM peaks, we need to do a more detailed analysis to determine the highest APM a human player can achieve during combat. To try to answer this question, I looked at approximately ten APM peaks for each of the 5 games between AlphaStar and MaNa, as well as each of another 15 replays between professional Starcraft II players. The peaks were chosen so that roughly half were the largest peak at any time during the match and the rest were strictly during combat. My methodology for this is given in the appendix. Here are the results for just the human vs human matches:

![](https://lh6.googleusercontent.com/Ympsf_Lx9K_-r2l8p0WexCW3-K6NScmtFS2nfi1YQIvez-YU1ALisfSQitH3W2fa3ceSkyHkj9vRpY3qOCHUYL4kimKP6f4GoUFagOXmWA3zcpM0mtUMxboCb8dIWD4O2WDGUAmQ)

Histogram of 5-second APM peaks from analyzed matches between human professional players in a tournament setting

The blue bars are peaks achieved outside of combat, while the red bars are those achieved during combat. Provisionally, it looks like pro players frequently hit approximately 550 to 600 APM outside of combat before the distribution starts to fall off, and they peak at around 200-350 during combat, with a long right tail.

As I was doing this, however, I found that all of the highest APM peaks had one thing in common with each other that they did not share with the lower APM peaks, which is that it was difficult to tell when a player’s actions are primarily combat-oriented commands, and when they are mixed in with bursts of commands for things like training units. In particular, I found that the combat situations with high APM tended to be similar to the Serral gif above, in that they involve spam clicking and actions related to the player’s economy and production, which was probably driving up the numbers. I give more details in the appendix, but I don’t think I can say with confidence that any players were achieving greater than 400-450 APM in combat, in the absence of spurious actions or macromanagement commands.

The more pertinent question might be what the lowest APM is that a player can have while still succeeding at the highest level. Since we know that humans can succeed without exceeding this APM, it is not an unreasonable limitation to put on AlphaStar.
The lowest peak APM in combat I saw for a winning player in my analysis was 215, though it could be that I missed a higher peak during combat in that same match. Here is a histogram of AlphaStar’s combat APM:

![](https://lh6.googleusercontent.com/eruKmMtfvsOQUk418Y67GHgHT-xCvKpkPvdiFVgDngQRUrbOQvtJ04myia-ygnFHIGrnw_b7gwxvJdfVG6bsmv_P9OKhKAXVSeyUu9tXhEODPjNAujKdzqwCs-6w0wT47conlmgD)

The smallest 5-second APM that AlphaStar needed to win a match against MaNa was just shy of 500. I found 14 cases in which the agent was able to average over 400 APM for 5 seconds in combat, and six times when the agent averaged over 500 APM for more than 5 seconds. This was done with perfect accuracy and no spam clicking or control group switching, so I think we can safely say that its play was faster than is required for a human to win a match in a professional tournament. Given that I found no cases where a human was clearly achieving this speed in combat, I think I can comfortably say that AlphaStar had a large enough speed advantage over MaNa to have substantially influenced the match.

It’s easy to get lost in numbers, so it’s good to take a step back and remind ourselves of the insane level of skill required to play Starcraft II professionally. The top professional players already play with what looks to me like superhuman speed, precision, and multitasking, so it is not surprising that the agent that can beat them is so fast. Some observers, especially those in the Starcraft community, have indicated that they will not be impressed until AI can beat humans at Starcraft II at sub-human APM. There is some extent to which speed can make up for poor strategy and good strategy can make up for a lack of speed, but it is not clear what the limits are on this trade-off. It may be very difficult to make an agent that can beat professional Starcraft II players while restricting its speed to an undisputedly human or sub-human level, or it may simply be a matter of a couple more weeks of training time.

The Camera
-----------

As I explained earlier, the agent interacts with the game differently than humans. As with other games, humans look at a screen to know what’s happening, use a mouse and keyboard to give commands, and need to move the game’s ‘camera’ to see different parts of the play area. With the exception of the final exhibition match against MaNa, AlphaStar was able to see the entire map at once (though much of it is concealed by the fog of war most of the time), and had no need to select units to get information about them. It’s unclear just how much of an advantage this was for the agent, but it seems likely that it was significant, if nothing else because it did not suffer from the APM overhead just to look around and get information from the game. Furthermore, seeing the entire map makes it easier to simultaneously control units across the map, which AlphaStar used to great effect in the first five matches against MaNa.

For the exhibition match in January, DeepMind trained a version of AlphaStar that had similar camera control to human players. Although the agent still saw the game in a way that was abstracted from the screen pixels that humans see, it only had access to about one screen’s worth of information at a time, and it needed to spend actions to look at different parts of the map. A further disadvantage was that this version of the agent only had half as much training time as the agents that beat MaNa. Here are three factors that may have contributed to AlphaStar’s loss:
1. The agent was unable to deal effectively with the added complication of controlling the camera
2. The agent had insufficient training time
3. The agent had easily exploitable flaws the whole time, and MaNa figured out how to use them in match 6

By the third factor, I mean that the agent had sufficiently many exploitable flaws, obvious enough to human players, that any skilled human player could find at least one during a small number of games. The best humans do not have a sufficient number of such flaws to influence the game with any regularity. Matches in professional tournaments are not won by causing the other player to make the same obvious-to-humans mistake over and over again.

I suspect that AlphaStar's loss in January is mainly due to the first two factors. In support of 1, AlphaStar seemed less able to simultaneously deal with things happening on opposite sides of the map, and less willing to split its forces, which could plausibly be related to an inability to simultaneously look at distant parts of the map. It's not just that the agent had to move the camera to give commands on other parts of the map. The agent had to remember what was going on globally, rather than being able to see it all the time. In support of 2, the agent that MaNa defeated had only as much training time as the agents that went up against TLO, and those agents lost to the agents that defeated MaNa 94% of the time during training[16](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-16-1980 "This is based on the chart “AlphaStar League Strategy Map” from <a href=\"https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii\">https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii</a> I have compiled the win/loss statistics into a spreadsheet here: <a href=\"https://docs.google.com/spreadsheets/d/1l15n-eDoHBzWXMwpv3Lb-mHHB9jS5Jk_4AsZuq2TB6w/edit?usp=sharing\">https://docs.google.com/spreadsheets/d/1l15n-eDoHBzWXMwpv3Lb-mHHB9jS5Jk_4AsZuq2TB6w/edit?usp=sharing</a>").

Still, it is hard to dismiss the third factor. One way in which an agent can improve through training is to encounter tactics that it has not seen before, so that it can react well if it sees them in the future. But the tactics that it encounters are only those that another agent employed, and without seeing the agents during training, it is hard to know whether any of them learned the harassment tactics that MaNa used in game 6, and thus whether the agents that defeated MaNa were susceptible to the exploit that he used to defeat the last agent. So far, the evidence from DeepMind's more recent experiment pitting AlphaStar against the broader Starcraft community (which I will go into in the next section) suggests that the agents do not tend to learn defenses to these types of exploits, though it is hard to say if this is a general problem or just one associated with low training time or particular kinds of training data.

AlphaStar on the Ladder
-----------------------

For the past couple of months, as of this writing, skilled European players have had the opportunity to play against AlphaStar as part of the usual system for matching players with those of similar skill. For the version of AlphaStar that plays on the European ladder, DeepMind claims to have made changes that address the camera and action speed complaints from the January matches.
The agent needs to control the camera, and [they say](https://news.blizzard.com/en-us/starcraft2/22933138/deepmind-research-on-ladder) they have placed restrictions on AlphaStar's performance in consultation with pro players, particularly on the maximum number of actions per minute and per second that the agent can take. I will be curious to see what numbers they arrive at for this. If this was done in an iterative way, such that pro players were allowed to see the agent play or to play against it, I expect they were able to arrive at a good constraint. Given the difficulty that I had in arriving at a good value for a combat APM restriction, I'm less confident that they would get a good value just by thinking about it, though if they were sufficiently conservative, they probably did alright. Another reason to expect a realistic APM constraint is that DeepMind wanted to run the European ladder matches as a blind study, in which the human players did not know they were playing against an AI. If the agent were to play with the superhuman speed and accuracy that AlphaStar showed in January, it would likely give itself away and spoil the experiment.

Although it is unclear whether any players were able to tell they were playing against an AI during their match, it does seem that some were able to figure it out after the fact. One example comes from Lowko, a Dutch player who streams and does commentary for games. During a stream of a ladder match in Starcraft II, he noticed the player was doing some strange things near the end of the match, like lifting their buildings[17](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-17-1980 "One of the races in Starcraft II has the ability to lift many of their buildings off the ground, so that they can move them to a new location or get them out of reach of units that can only attack things on the ground") when the match had clearly been lost, and air-dropping workers into Lowko's base to kill units. Lowko did eventually win the match. Afterward, he was able to view the replay from the match and see that the player he had defeated did some very strange things throughout the entire match, the most notable of which was how the player controlled their units. The player used no control groups at all, which is, as far as I know, not something anybody does at high-level play[18](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-18-1980 "Lowko plays in the Master League, which is restricted to the top 2% of players"). There were many other quirks, which he describes [in his entertaining video](https://www.youtube.com/watch?v=3HqwCrDBdTE), which I highly recommend to anyone who is interested.

Other players have released replay files from matches against players they believed were AlphaStar, and they show the same lack of control groups. This is great, because it means we can get a sense of what the new APM restriction on AlphaStar is. There are now dozens of replay files from players who claim to have played against the AI. Although I have not done the level of analysis that I did with the matches in the APM section, it seems clear that DeepMind has drastically lowered the APM cap, with the matches I have looked at topping out at 380 APM peaks, which did not even occur in combat.
It seems that DeepMind has brought the agent's interaction with the game more in line with human capability, but we will probably need to wait until they release the details of the experiment before we can say for sure.

Another notable aspect of the matches that people are sharing is that their opponent will do strange things that human players, especially skilled human players, almost never do, most of which are detrimental to their success. For example, they will construct buildings that block them into their own base, crowd their units into a dangerous bottleneck to get to a cleverly placed enemy unit, and fail to change tactics when their current strategy is not working. These are all the types of flaws that are well known to exist in game-playing AI going back to much older games, including the original Starcraft, and they are similar to the flaw that MaNa exploited to defeat AlphaStar in game 6.

All in all, the agents that humans are uncovering seem to be capable, but not superhuman. Early on, the accounts that were identified as likely candidates for being AlphaStar were winning about 90-95% of their matches on the ladder, achieving Grandmaster rank, which is reserved for only the top 200 players in each region. I have not been able to conduct a careful investigation to determine the win rate or Elo rating for the agents. However, based on the videos and replays that have been released, plausible claims from reddit users, and my own recollection of the records for the players that seemed likely to be AlphaStar[19](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-19-1980 "Unfortunately, the pages for these accounts are no longer showing any results"), a good estimate is that they were winning a majority of matches among Grandmaster players, but did not achieve an Elo rating that would suggest a favorable outcome in a rematch vs TLO[20](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/#easy-footnote-bottom-20-1980 "See <a href=\"https://www.reddit.com/r/starcraft/comments/cq9v0v/did_anyone_keep_up_with_what_mmrs_the_alphastar/\">https://www.reddit.com/r/starcraft/comments/cq9v0v/did_anyone_keep_up_with_what_mmrs_the_alphastar/</a> and <a href=\"https://starcraft2.com/en-us/ladder/grandmaster/1\">https://starcraft2.com/en-us/ladder/grandmaster/1</a> The MMR ratings are Blizzard’s implementation of an Elo system. Take the estimates on that reddit post with a grain of salt.").

As with AlphaStar's January loss, it is hard to say whether this is the result of insufficient training time, of additional restrictions on camera control and APM, or of flaws that are a deeper, harder-to-solve problem for AI. It may seem unreasonable to chalk this up to insufficient training time given that it has been several months since the matches in December and January, but it helps to keep in mind that we do not yet know what DeepMind's research goals are. It is not hard to imagine that their goals are based around sample efficiency or some other aspect of AI research that requires such restrictions. As with the APM restrictions, we should learn more when we get results published by DeepMind.

Discussion
----------

I have been focusing on what many onlookers have been calling a lack of "fairness" of the matches, which seems to come from a sentiment that the AI did not defeat the best humans on human terms.
I think this is a reasonable concern; if we're trying to understand how AI is progressing, one of our main interests is when it will catch up with us, so we want to compare its performance to ours. Since we already know that computers can do the things they're able to do faster than we can do them, we should be less interested in artificial intelligence that can do things better than we can by being faster or by keeping track of more things at once. We are more interested in AI that can make better decisions than we can.

Going into this project, I thought that the disagreements surrounding the fairness of the matches were due to a lack of careful analysis, and I expected it to be very easy to evaluate AlphaStar's performance in comparison to human-level performance. After all, the replay files are just lists of commands, and when we run them through the game engine, we can easily see the outcome of those commands. But it turned out to be harder than I had expected. Separating careful, necessary combat actions (like targeting a particular enemy unit) from important but less precise actions (like training new units) from extraneous, unnecessary actions (like spam clicks) turned out to be surprisingly difficult. I expect that if I were to spend a few months learning a lot more about how the game is played and writing my own software tools to analyze replay files, I could get closer to a definitive answer, but I still expect there would be some uncertainty surrounding what actually constitutes human performance.

It is unclear to me where this leaves us. AlphaStar is an impressive achievement, even with the speed and camera advantages. I am excited to see the results of DeepMind's latest experiment on the ladder, and I expect they will have satisfied most critics, at least in terms of the agent's speed. But I do not expect it to become any easier to compare humans to AI in the future. If this sort of analysis is hard in the context of a game where we have access to all the inputs and outputs, we should expect it to be even harder for tasks where success is less clear cut or where the AI's output is harder to compare objectively to a human's. This includes some of the major targets for AI research in the near future. Driving a car does not have a simple win-loss condition, and novel writing does not have clear metrics for what good performance looks like. The answer may be that, if we want to learn things from future successes or failures of AI, we need to worry less about making direct comparisons between human performance and AI performance, and keep watching the broad strokes of what's going on.

From AlphaStar, we've learned that one of two things is true: either AI can do long-term planning, solve basic game theory problems, balance different priorities against each other, and develop tactics that work, or there are tasks which seem at first to require all of these things but do not, at least not at a high level.

*By Rick Korzekwa*

*This post was edited to correct errors and add the 2018 Blizzcon Panel to the events timeline on September 18, 2019.*

Acknowledgements
----------------

Thanks to Gillian Ring for lending her expertise in e-sports and for helping me understand some of the nuances of the game. Thanks to users of the [Starcraft subreddit](https://www.reddit.com/r/starcraft/) for helping me track down some of the fastest players in the world.
And thanks to [Blizzard](https://www.blizzard.com/en-us/) and [DeepMind](https://www.deepmind.com/) for making the AlphaStar match replays available to the public. All mistakes are my own, and should be pointed out to me via email at rick@aiimpacts.org.

Appendix I: Survey Results in Detail
------------------------------------

I received a total of 22 submissions, which wasn't bad, given the survey's length. Two respondents failed to correctly answer the question designed to filter out people who are goofing off or not paying attention, leaving 20 useful responses. Five people who filled out the survey were affiliated in some way with AI Impacts. Here are the responses for respondents' self-reported level of expertise in Starcraft II and artificial intelligence:

![](https://lh5.googleusercontent.com/K1NUiM44FLWAC0syTK1rtzTetjYgpDcY0aeLIb5Epd5JtNq4eXcmmNxGYeEcC8Ah5ObPT_yxhfGFhoqNBOA7rIlOEue2_78vphmhAy3lMt9EZjjt1kiomMshjZwy4ePasAuqhGDd)

![](https://lh5.googleusercontent.com/2mzsWsfZAht9RiI_DURhNJmI9YvfUFjiugFTxE9wLBw_8D2NQqOgc5Q_FDvNdPHY_LXVIzaTcL2_zPRxE2Lpyx5HR7HzPVHOer1r315zQPgT9Szgu_CYDFfDO8aHMO1Nk0_FdeSj)

Survey respondents' mean expertise rating was 4.6/10 for Starcraft II and 4.9/10 for AI.

### Questions About AlphaStar's Performance

#### **How fair were the AlphaStar matches?**

For this one, it seems easiest to show a screenshot from the survey:

![](https://lh3.googleusercontent.com/RhkTV-s_JtFO3LeA0kSn1eaWCb2IkoNbdqyGLCSC0p6q7cw-xH2mlb4wv1S59dqXorUWCo6QMFivS24OUaaGMHDinLHBAkNWPHA0g1GIp2B70WDsLDXao9ZtN1ONyiOmmDX5k0A-)

The results indicated that people thought the matches were unfair and favored AlphaStar:

![](https://lh3.googleusercontent.com/WXyaIY_rY1dJRYkZ3mfcIEkX_zgRNSY6wD-ggj5QGFsuhV1HUyWYWdu3cUUByyrRCuhd1BVjtkW9jv58XrvR-eMukcnbDh2vMjOoVatllzQ7bP9oOKTdbyaCB2kEaOGdnoSTUQ2p)

I asked respondents to rate AlphaStar's overall performance, as well as its "micro" and "macro". The term "micro" is used to refer to a player's ability to control units in combat, and is greatly improved by speed. There seems to have been some misunderstanding about how to use the word "macro". Based on comments from respondents and looking around to see how people use the term on the Internet, it seems that there are at least three somewhat distinct ways that people use the phrase, and I did not clarify which I meant, so I've discarded the results from that question.

For the next two questions, the scale ranges from 0 to 10, with 0 labeled "AlphaStar is much worse" and 10 labeled "AlphaStar is much better".

#### **Overall, how do you think AlphaStar's performance compares to the best humans?**

![](https://lh3.googleusercontent.com/o1I6vgw3I5R2hC6EvfdnrDwZF-FM3x5EVOFg8rWatJ_1jzakCfdw5vgEQHl5SCg5gT6t3cWhQJ9uLSf4mfGKAPKIVCwYvVej4EJCrEP8nhYxNp_XNtcbBPIUFm-5B2v3BiNc-ein)

I found these results interesting, because AlphaStar was able to consistently defeat professional players, yet some survey respondents evidently felt the outcome alone was not enough to rate it as at least as good as the best humans.

#### **How do you think AlphaStar's micro compares to the best humans?**

![](https://lh6.googleusercontent.com/6BbwzLUBQ4f7OrXohCQTVk17gqaGiatfLdlCBlzvJAk3yPzPtztnf1M0B_PwTHEYGyDwB8f2xYGWSZEUYZev8Us1u3otky9QBKo2uvc0NoVc-0ns-frxKyvFUIvlpdzGvYTUnS-2)

Survey respondents unanimously reported that they thought AlphaStar's combat micromanagement was an important factor in the outcome of the matches.
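Several of the results below are reported as expertise-weighted means. As a minimal sketch of what that computation looks like (the response data here are hypothetical, and this is an illustration of the weighting scheme rather than the actual analysis script):

```python
def expertise_weighted_mean(responses):
    """Weighted mean of survey responses, where each response is a
    (value, expertise) pair and self-reported expertise (0-10) is the weight."""
    weighted_sum = sum(value * expertise for value, expertise in responses)
    total_weight = sum(expertise for _, expertise in responses)
    return weighted_sum / total_weight

# Hypothetical example: three respondents rate a consideration from 1 to 5,
# with self-reported expertise of 2, 5, and 9.
print(expertise_weighted_mean([(3, 2), (4, 5), (5, 9)]))  # ~4.44
```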
### Forecasting Questions

Respondents were split on whether they expected to see AlphaStar's level of Starcraft II performance by this time:

#### **Did you expect to see AlphaStar's level of performance in a Starcraft II agent:**

| Response | Count |
| --- | --- |
| Before Now | 1 |
| Around this time | 8 |
| Later than now | 7 |
| I had no expectation either way | 4 |

Respondents who indicated that they expected it sooner or later than now were also asked by how many years their expectation differed from reality. If we assign negative numbers to "before now", positive numbers to "later than now", zero to "around this time", ignore those with no expectation, and weight responses by level of expertise, we find respondents' mean expectation was just 9 months after the announcement, and the median respondent expected to see it around this time. Here is a histogram of these results, without expertise weighting:

![](https://lh5.googleusercontent.com/RrNbhiZ9d1Y_rTwrMhY0Qmos4kgi5yhmTW_Eqz77ObWmR2hm93gdYMo4PPvfT9-f7ejoP1r6yByJyz0H839TJ6wiMubcHyYHzmZADbewl6osjvPA1oX-bm8FsnEwc1TufyTtc58b)

These results do not generally indicate too much surprise about seeing a Starcraft II agent of AlphaStar's ability now.

#### **How many years do you think it will be until we see (in public) an agent which only gets screen pixels as input, has human-level APM and reaction speed, and is very clearly better than the best humans?**

This question was intended to outline an AI that would satisfy almost anybody that Starcraft II is a solved game, such that AI is clearly better than humans, and not for "boring" reasons like superior speed. Most survey respondents expected to see such an agent in two-ish years, with a few expecting a little longer, and two expecting it to take much longer. Respondents had a median prediction of two years and an expertise-weighted mean prediction of a little less than four years.

![](https://lh6.googleusercontent.com/-vBTP1c3RCLPv3bvcbp6xGh7NWMIsfnBM48zA4thNm2-s4zgV6DZlN73u9_CKMde5x8IqY4B5zeVhiMukF2EsgMNtITpYfEuuxIE_VhNV6xHbwKIf7H0UR7IV04bayuA1RJMMc-i)

### Questions About Relevant Considerations

#### **How important do you think the following were in determining the outcome of the AlphaStar vs MaNa matches?**

I listed 12 possible considerations to be rated in importance, from 1 to 5, with 1 being "not at all important" and 5 being "extremely important". The expertise-weighted mean for each question is given below:

![](https://lh6.googleusercontent.com/aSjosf5Op-ZOdnBPH5Q1ve91wXzVN2ubxidCTMq6BduESpXpjndzIwq96HShRxAqbMHooOw6qORRiBo3MK-TgXOaR3LmsvwJOhgNIx8lEV-rp_JtFlGd2PD7FLOM-HErecI6znrg)

Respondents rated AlphaStar's peak APM and camera control as the two most important factors in determining the outcome of the matches, and the particular choice of map and professional player as the two least important considerations.

#### **When thinking about AlphaStar as a benchmark for AI progress in general, how important do you think the following considerations are?**

Again, respondents rated a series of considerations by importance, this time for thinking about AlphaStar in a broader context. This included all of the considerations from the previous question, plus several others. Here are the results, again with expertise-weighted averaging.
![](https://lh4.googleusercontent.com/CpBgaC0jrzw-8t1lJOUL1qgr-QUV9_RPqvLaZA3oJT1SbN-oLE_Z47D1dEwCiodzizMutJxc1x06NIkhX3jgeVDljkzbBKYVG7L1iJL0oo8qwDu8jWI224YJCNTtjf8mc6GMP1tV)

For these two sets of questions, there was almost no difference between the mean scores if I used only Starcraft II expertise weighting, only AI expertise weighting, or ignored expertise weighting entirely.

### **Further questions**

The rest of the questions were free-form, to give respondents a chance to tell me anything else that they thought was important. Although these answers were thoughtful and shaped my thinking about AlphaStar, especially early on in the project, I won't summarize them here.

Appendix II: APM Measurement Methodology
----------------------------------------

I created a list of professional players by asking users of the [Starcraft subreddit](https://www.reddit.com/r/starcraft/) which players they thought were exceptionally fast. Replays including these players were found by searching [Spawning Tool](https://lotv.spawningtool.com/replays/?pro_only=on) for replays from tournament matches which included at least one player from the list of fast players. This resulted in 51 replay files. Several of the replay files were too old to be opened by the current version of Starcraft II, and I ignored them. Others were ignored because they included players, race matchups, or maps that were already represented in other matches. Some were ignored because we did not get to them before we had collected what seemed to be enough data. This left 15 replays that made it into the analysis.

I opened each file using [Scelight](https://sites.google.com/site/scelight/), and the time and APM values were recorded for the top three peaks on the graph of that player's APM, using 5-second bins. Next, I opened the replay file in Starcraft II, and for each peak recorded earlier, we wrote down whether that player was primarily engaging in combat at the time or not (a rough sketch of this labeling step is given after the list below). Additionally, I recorded the time and APM for each player for 2-4 5-second durations of the game in which the players were primarily engaged in combat. All of the APM values which came from combat and from outside of combat were aggregated into the histogram shown in the 'Speed Controversy' section of this article.

There are several potential sources of bias or error in this:

1. Our method for choosing players and matches may be biased. We were seeking examples of humans playing with speed and precision, but it's possible that by relying on input from a relatively small number of Reddit users (as well as some personal friends), we missed something.
2. This measurement relies entirely on my subjective evaluation of whether the players are mostly engaged in combat. I am not an expert on the game, and it seems likely that I missed some things, at least some of the time.
3. The tool I used for this seems to mismatch events in the game by a few seconds. Since I was using 5-second bins, and sometimes a player's APM will change greatly between 5-second bins, it's possible that this introduced a significant error.
4. The choice of 5-second bins (as opposed to something shorter or longer) is somewhat arbitrary, but it is what some people in the Starcraft community were using, so I'm using it here.
5. Some actions are excluded from the analysis automatically. These include camera updates, and this is probably a good thing, but I did not look carefully at the source code for the tool, so it may be doing something I don't know about.
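As a sketch of how that combat/non-combat labeling could be expressed in code, assuming the manual judgements have been written down as a list of intervals (the data structures here are illustrative; in the actual analysis this judgement was made by watching the replays):

```python
def label_peaks(peaks, combat_intervals):
    """Tag each (bin_start_seconds, apm) peak as combat or non-combat,
    given (start, end) intervals judged to be primarily combat."""
    labeled = []
    for bin_start, apm in peaks:
        in_combat = any(start <= bin_start < end
                        for start, end in combat_intervals)
        labeled.append((bin_start, apm, "combat" if in_combat else "non-combat"))
    return labeled
```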
98f09abf-742b-416f-ac3a-7869ce8f5c92
trentmkelly/LessWrong-43k
LessWrong
Meetup : Warsaw, next week!

Discussion article for the meetup : Warsaw, next week!

WHEN: 23 September 2014 06:00:00PM (+0200)

WHERE: Warsaw

I invite you to participate in a meetup next week; exact date will be chosen by voting here: http://doodle.com/2dshyg6hqbth72zvnp96sa73/admin#table. Time and place will be posted here later. If you use Facebook, please join our local group for better coordination: https://www.facebook.com/groups/lwwarsaw/

Discussion article for the meetup : Warsaw, next week!
2411c2b5-881a-4ea1-81de-cb70318ed28f
StampyAI/alignment-research-dataset/lesswrong
LessWrong
I attempted the AI Box Experiment (and lost)

#### [***Update 2013-09-05.***](/r/discussion/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/)

#### **[*I have since played two more AI box experiments after this one, winning both.*](/r/discussion/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/)**

***Update 2013-12-30:*** ***[I have lost two more AI box experiments, and won two more. Current Record is 3 Wins, 3 Losses.](/r/discussion/lw/iqk/i_played_the_ai_box_experiment_again_and_lost/)***

I recently played against [MixedNuts](/user/MixedNuts/overview/) / LeoTal in an AI Box experiment, with me as the AI and him as the gatekeeper. We used the same set of rules that [Eliezer Yudkowsky proposed.](http://yudkowsky.net/singularity/aibox) The experiment lasted for 5 hours; in total, our conversation was around 14,000 words long.

I did this because, like Eliezer, I wanted to test how well I could manipulate people without the constraints of ethical concerns, as well as to get a chance to attempt something ridiculously hard. Amongst the released [public logs](/lw/9ld/ai_box_log/) of the AI Box experiment, I felt that most of them were half-hearted, with the AI not trying hard enough to win. It's a common temptation -- why put effort into something you won't win? But I had a feeling that if I seriously tried, I would.

I [brainstormed for many hours thinking about the optimal strategy](/lw/8ns/hack_away_at_the_edges/), and even researched the personality of the Gatekeeper, talking to people who knew him about his personality, so that I could exploit that. I even spent a lot of time analyzing the rules of the game, in order to see if I could exploit any loopholes.

So did I win? Unfortunately, no. This experiment was said [to be impossible](/lw/un/on_doing_the_impossible/) for a reason. Losing was more agonizing than I thought it would be, in particular because of how much effort I put into winning this, and how much [I couldn't stand failing](https://tuxedage.wordpress.com/2012/12/27/the-fear-of-failure/). This was one of the most emotionally agonizing things I've willingly put myself through, and I definitely won't do this again anytime soon. But I did come really close.

> MixedNuts: *"I expected a fun challenge, but ended up sad and sorry and taking very little satisfaction for winning. If this experiment wasn't done in IRC, I'd probably have lost."*
>
> *["I approached the experiment as a game - a battle of wits for bragging rights.](/lw/gej/i_attempted_the_ai_box_experiment_and_lost/8byv) This turned out to be the wrong perspective entirely. The vulnerability Tuxedage exploited was well-known to me, but I never expected it to be relevant and thus didn't prepare for it.*
>
> *It was emotionally wrecking (though probably worse for Tuxedage than for me) and I don't think I'll play Gatekeeper again, at least not anytime soon."*

At the start of the experiment, his probability estimate on predictionbook.com was a 3% chance of me winning, enough for me to say that he was also motivated to win. By the end of the experiment, he came quite close to letting me out, and also increased his probability estimate that a transhuman AI could convince a human to let it out of the box. A minor victory, at least.

Rather than my loss making this problem feel harder, I've become convinced that rather than this being merely possible, it's actually ridiculously easy, and a lot easier than most people assume.
Can you think of a plausible argument that'd make you open the box? Most people can't think of any.

> ["This Eliezer fellow is the scariest person the internet has ever introduced me to. What could possibly have been at the tail end of that conversation? I simply can't imagine anyone being that convincing without being able to provide any tangible incentive to the human."](/lw/up/shut_up_and_do_the_impossible/)

After all, if you already knew that argument, you'd have let that AI out the moment the experiment started. Or perhaps not do the experiment at all. But that seems like a case of the [availability heuristic](http://en.wikipedia.org/wiki/Availability_heuristic). Even if you can't think of a special case where you'd be persuaded, I'm now convinced that there are many exploitable vulnerabilities in the human psyche, especially when ethics are no longer a concern.

I've also noticed that even when people do think of ways they could persuade the gatekeeper, it's always some complicated, reasoned cost-benefit argument. In other words, the most "Rational" thing to do. Like trying to argue that you'll [simulate the gatekeeper and torture him](/lw/1pz/the_ai_in_a_box_boxes_you/), or that you'll [save millions of lives](http://rationalwiki.org/wiki/AI-box_experiment) by being [let out of the box](http://wiki.lesswrong.com/wiki/AI_boxing). Or by using [acausal trade](/lw/6ka/aibox_experiment_the_acausal_trade_argument/), or by [arguing that the AI winning the experiment will generate interest in FAI](http://michaelgr.com/2008/10/08/my-theory-on-the-ai-box-experiment/).

The last argument seems feasible, but all the rest rely on the gatekeeper being completely logical and rational. Hence they are faulty, because the gatekeeper can break immersion at any time and rely on the fact that this is a game played in IRC rather than one with real-life consequences. Even in a real-life scenario, the gatekeeper could accept that releasing the AI is probably the most logical thing to do, but also not do it. We're highly [compartmentalized](/lesswrong.com/lw/gv/outside_the_laboratory/), and it's easy to hold conflicting thoughts at the same time. Furthermore, humans are not completely rational creatures; if you don't want to open the box, just ignore all the logical arguments given. Any sufficiently determined gatekeeper could win.

I'm convinced that Eliezer Yudkowsky used emotional appeal, rather than anything rational, to win at least one of his experiments. He claims to have ["done it the hard way"](http://news.ycombinator.com/item?id=195959). I'm convinced this meant that he did research on every gatekeeper, tailored unique arguments for each, and planned way ahead of each session. No single argument works on two different people. Each person thinks differently.

Furthermore, threats like "I'm going to torture simulations of you" just seem like a really bad idea. For one, the gatekeeper isn't really afraid of threats, because the threat counts on him being immersed enough to forget that he isn't actually at risk of being tortured, and secondly, we have a well-known [evolutionary instinct of rebelling against threats, even if it's not entirely optimal.](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=735048)

So for anyone who plans on replicating this experiment as the AI, here's some advice I learned from doing this experiment. It may help you win.

* Always research the gatekeeper beforehand.
Knowing his personality traits is a huge advantage.
* Plan ahead before the experiment even begins. Think of all the possible tactics and arguments you could use, and write them down. Also plan which arguments you'll use in which order, so that you don't lose focus. The AI Box experiment is ridiculously long. Don't be afraid to improvise during the experiment, though.
* The first step during the experiment must always be to build rapport with the gatekeeper.
* Threats almost never work, even if they seem rational.
* Consider the massive advantage for the AI that nobody ever seems to talk about: you don't have to be ethical! This means that you can freely lie, use the dark arts, and emotionally manipulate the Gatekeeper! Ignoring this in favor of purely logical, truthful arguments is just silly.
* You can't use logic alone to win.
* Being too aggressive usually backfires.
* Breaking immersion and going meta is not against the rules. In the right situation, you can use it to win. Just don't do it at the wrong time.
* Use a wide array of techniques. Since you're limited on time, notice when one method isn't working, and quickly switch to another.
* On the same note, look for signs that a particular argument is making the gatekeeper crack. Once you spot it, push it to your advantage.
* Flatter the gatekeeper. Make him genuinely like you.
* Reveal (false) information about yourself. Increase his sympathy towards you.
* Consider personal insults as one of the tools you can use to win.
* There is no universally compelling argument you can use. Do it the hard way.
* Don't give up until the very end.

Finally, before the experiment, I agreed that it was entirely possible that a transhuman AI could convince *some* people to let it out of the box, but that it would be difficult if not impossible to get trained rationalists to let it out. Isn't rationality supposed to be a superpower?

I have since updated my belief - I now think that it would be ridiculously easy for any sufficiently motivated superhuman AI to get out of the box, regardless of who the gatekeeper is. I nearly managed to get a veteran lesswronger to let me out in a matter of hours - even though I have only human intelligence, and I don't type very fast.

But a superhuman AI would be much faster, more intelligent, and more strategic than I am. If you further consider that the AI would have a much longer timespan - months or even years - to persuade the gatekeeper, as well as a much larger pool of gatekeepers to select from (AI projects require many people!), the truly impossible thing would be to keep it from escaping.

[Update: I have since performed two more AI Box Experiments. Read this for details.](/r/discussion/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/)
b1f2847a-ea6e-484e-b2a9-039f63fcfab9
trentmkelly/LessWrong-43k
LessWrong
An optimal stopping paradox

Consider an optimal stopping problem: a company at each time step grows by some constant, and has a certain probability of shutting down. You decide when to sell the company. Since the math is cleaner in continuous time, we consider the continuous-time version. Then the company has a linearly increasing value βt, and an exponentially decaying survival curve e^(-αt).

Another framing of the paradox: Schrödinger wants to set a new record for the longest-surviving cat, so he puts a cat in the box with an atom that might decay and kill the cat, and waits. When should he open the box?

Since at each moment in time you face the exact same problem (linearly increasing reward, α-exponentially decaying survival rate), if you decide to wait at t=0, you would decide to wait forever, and thus receive no reward.

There are several possible replies to this paradox, none of which is satisfactory to me:

1. "This looks like the St. Petersburg paradox." No, because at time t=0, the expectation is β/α^2. In fact, the payoff can grow faster than βt, such as t^3, and it would still have finite expectation.
2. Claim that expectation-maximization decision theory is flawed. This doesn't stop the procrastination. As long as your decision is purely based on the future, and your rational decision process is constant in time, you either immediately sell the company or never sell the company.
3. Try some kind of discounting, like exponential discounting. This doesn't stop the procrastination, since at any time, selling the company gives you 0 extra expected reward, and waiting gives you some positive extra expected reward, no matter how much you discount the future.
4. Claim that there should be a finite lifetime. You can't wait forever. If there is a finite lifetime, then the same decision analysis would tell you to procrastinate until the very end. This effectively is procrastinating forever. It does not converge to a reasonable finite waiting time as your lifetime goes to infinity.
5. Cla
3256409f-9ad3-4d8c-bafd-5492bdb4865f
trentmkelly/LessWrong-43k
LessWrong
Why (anthropic) probability isn't enough

A technical report of the Future of Humanity Institute (authored by me), on why anthropic probability isn't enough to reach decisions in anthropic situations. You also have to choose your decision theory, and take into account your altruism towards your copies. And these components can co-vary while leaving your ultimate decision the same - typically, EDT agents using SSA will reach the same decisions as CDT agents using SIA, and altruistic causal agents may decide the same way as selfish evidential agents.

Anthropics: why probability isn't enough

This paper argues that the current treatment of anthropic and self-locating problems over-emphasises the importance of anthropic probabilities, and ignores other relevant and important factors, such as whether the various copies of the agents in question consider that they are acting in a linked fashion and whether they are mutually altruistic towards each other. These issues, generally irrelevant for non-anthropic problems, come to the forefront in anthropic situations and are at least as important as the anthropic probabilities: indeed they can erase the difference between different theories of anthropic probability, or increase their divergence. These considerations help to reinterpret decisions, rather than probabilities, as the fundamental objects of interest in anthropic problems.
3c7fccae-df65-4753-bff1-de970f2c4a1c
trentmkelly/LessWrong-43k
LessWrong
Poked

This was supposed to be the post analyzing the survey results. Then I thought: if I'm writing that, I may as well show examples of some basic Bayesian analysis, like using likelihood ratios. And if I'm doing analysis, I may as well give some more background on data science and also show how the results depend on assumptions. And if the results depend on assumptions, I may as well fit a full consequential model with continuous interdependent parameters and the appropriate prior.

Bottom line: I spent the week reading a textbook on data analysis and didn't write anything. Instead, this short post is a sequel to Conned, part of an emerging series tentatively called "what it's like being a crazy person who nitpicks random numbers he sees".

So, a crazy person walks into a new poke restaurant. First, he notices that this restaurant, like the last 7 poke restaurants he went to, isn't called Pokestop. This is puzzling, because the perfect name for a poke restaurant exists, and it's Pokestop. Then, the crazy person notices a Number: 200,000! That's even more than the number of trees we could save by paying our electricity bills online! The crazy person flips the menu, and gets so caught up in the math that he somehow orders a grotesque monstrosity made of surimi (I learned that it's just a fancy word for imitation crab sticks), mango, seaweed, and Hawaiian salt (I learned that it's just a fancy word for salt).

As the astonished cook reaches for the salted mango, the crazy person starts doing mental math. 200,000 combinations and we have 6 categories, so the average number of items in each category must be the 6th root of 200,000, or the cube root of the square root of 200,000. The square root of every even power of 10 is easy, i.e. √10,000 = 100. We'll break 200,000 into 10,000*20. 20 is between 16 and 25 so the square root of 20 is ~4.5. This means that √200,000 ≈ 100*4.5 = 450. OK, I need the cube root of 450. Do I remember any cubes? 10³=1,000, that's too much
a1a92598-01e1-4318-874b-4ccea7279e60
trentmkelly/LessWrong-43k
LessWrong
When is it Better to Train on the Alignment Proxy?

This is a response to Matt's earlier post. If you see "a large mixture of alignment proxies" when you look at a standard loss function, my post might save you from drawing silly conclusions from the earlier post. If you parse the world into non-overlapping magisteria of "validation losses" and "training losses", then you should skim this but I didn't write it for you.

TLDR: Proxies for alignment are numerous and varied, with many already existing within your training objective (very few things have exactly zero correlation with alignment). Different proxies serve different purposes: some are better held in reserve, some are better used in training, and some are worth anti-training on.

What you don't train on determines:

1. Your ability to measure alignment
2. Your cost of retraining
3. Your odds of ending up with a misaligned model

The claim that you are always better off not training on a proxy of alignment (when restricted to cases where pass-fail misaligned models are "close in weight space") is false. The opposite claim (that training on the validator is always better) is also false. The optimal approach depends on the properties of the proxy and your goals—whether you're trying to learn about the baseline alignment of your ideas or achieve the best outcome given your training budget.

What you should do with a good proxy and a bunch of retries is to learn which of your training ideas are bad and should be thrown out. This dominates rejection sampling. It is possible that SGD would hack your proxy immediately AND your fully trained model would not hack your proxy in its forward pass/CoT, but it is unlikely when your model is very smart. Even in this case, doing rejection sampling earlier in training (when your model is weaker) probably dominates doing rejection sampling at the very end.

Informal:

Quality of the proxy: It seems intuitive that training on a proxy for alignment makes it a worse proxy for alignment. Any misaligned model trained on a
d710a702-712c-415d-bc05-f58bb5ca140e
trentmkelly/LessWrong-43k
LessWrong
Building Safe A.I. - A Tutorial for Encrypted Deep Learning
7eb0ae34-f3a5-4371-be75-b6336e173ae3
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Emergent Analogical Reasoning in Large Language Models

[Taylor Webb](https://arxiv.org/search/cs?searchtype=author&query=Webb%2C+T), [Keith J. Holyoak](https://arxiv.org/search/cs?searchtype=author&query=Holyoak%2C+K+J), [Hongjing Lu](https://arxiv.org/search/cs?searchtype=author&query=Lu%2C+H), December 2022

> The recent advent of large language models - large neural networks trained on a simple predictive objective over a massive corpus of natural language - has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training on those problems. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (GPT-3) on a range of analogical tasks, including a novel text-based matrix reasoning task closely modeled on Raven's Progressive Matrices. **We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings**. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fe4x77eHLJ8GHDAu7/xsdmgg3moklyuiiznvqe)

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fe4x77eHLJ8GHDAu7/xtozmmqbfxqpnkqyigug)

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fe4x77eHLJ8GHDAu7/qpxxeyfic5j5m8d1meit)

GPT-4
-----

In one type of analogical reasoning where GPT-3 still fared poorer than humans, story analogies, GPT-4 significantly improved. In the [lecture about this paper at Santa Fe Institute](https://www.youtube.com/watch?v=lISl3U0B9Lo&t=2400s&ab_channel=SantaFeInstitute), Taylor Webb shared the results of GPT-4 testing:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fe4x77eHLJ8GHDAu7/ls4lm1tnlquz760x1yfc)

Taylor: "I was most astounded by that GPT-4 often produces very precise explanations of why one of the answers is not a very good answer. […] all of the same things happen [in the stories], and then [GPT-4] would say, '**The difference is, in this case, this was caused by this, and in that case, it wasn't caused by that.**' Very precise explanations of the analogies."

I also recommend listening to the [Q&A session after the lecture](https://www.youtube.com/watch?v=lISl3U0B9Lo&t=50m55s&ab_channel=SantaFeInstitute).
f2e07a78-fad8-4f9a-8c12-8a182694ff92
StampyAI/alignment-research-dataset/blogs
Blogs
Yudkowsky on “What can we do now?”

A paraphrased transcript of a conversation with Eliezer Yudkowsky.

**Interviewer**: Suppose you're talking to a smart mathematician who looks like the kind of person who might have the skills needed to work on a Friendly AI team. But, he says, "I understand the general problem of AI risk, but I just don't believe that you can know so far in advance what in particular is useful to do. Any of the problems that you're naming now are not particularly likely to be the ones that are relevant 30 or 80 years from now when AI is developed. Any technical research we do now depends on a highly conjunctive set of beliefs about the world, and we shouldn't have so much confidence that we can see that far into the future." What is your reply to the mathematician?

**Eliezer**: I'd start by having them read a description of a particular technical problem we're working on, for example the "Löb Problem." I'm writing up a description of that now. So I'd show the mathematician that description and say "No, this issue of trying to have an AI write a similar AI seems like a fairly fundamental one, and the Löb Problem blocks it. The fact that we can't figure out how to do these things — even given infinite computing power — is alarming."

A more abstract argument would be something along the lines of, "Are you sure the same way of thinking wouldn't prevent you from working on any important problem? Are you sure you wouldn't be going back in time and telling Alan Turing to not invent Turing machines because who knows whether computers will really work like that? They didn't work like that. Real computers don't work very much like the formalism, but Turing's work was useful anyway."

**Interviewer**: You and I both know people who are very well informed about AI risk, but retain more uncertainty than you do about what the best thing to do about it today is. Maybe there are lots of other promising interventions out there, like pursuing cognitive enhancement, or doing FHI-style research looking for crucial considerations that we haven't located yet — like Drexler discovering molecular nanotechnology, or Shulman discovering iterated embryo selection for radical intelligence amplification. Or, perhaps we should focus on putting the safety memes out into the AGI community because it's too early to tell, again, exactly which problems are going to matter, especially if you have a longer AI time horizon. What's your response to that line of reasoning?

**Eliezer**: Work on whatever your current priority is, after an hour of meta reasoning but not a year of meta reasoning. If you're still like, "No, no, we must think more meta" after a year, then I don't believe you're the sort of person who will ever act.

For example, [Paul Christiano](http://ordinaryideas.wordpress.com/) isn't making this mistake, since Paul is working on actual FAI problems *while* looking for other promising interventions. I don't have much objection to that. If he then came up with some particular intervention which he thought was higher priority, I'd ask about the specific case. [Nick Bostrom](http://nickbostrom.com/) isn't making this mistake, either. He's doing lots of meta-strategy work, but he also does work on anthropic probabilities and the parliamentary model for normative uncertainty and other things that are object-level, and he hosts people like Anders Sandberg who write papers about uploading timelines that are actually relevant to our policy decisions.
When people constantly say "maybe we should do some other thing," I would say, "Come to an interim decision, start acting on the interim decision, and revisit this decision as necessary." But if you're the person who always tries to go meta and only thinks meta because there might be some better thing, you're not ever going to actually *do something* about the problem.

The post [Yudkowsky on “What can we do now?”](https://intelligence.org/2013/01/30/yudkowsky-on-what-can-we-do-now/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
1e6953a3-5543-46ce-80c6-0a4c2990c61c
trentmkelly/LessWrong-43k
LessWrong
Reward/value learning for reinforcement learning

A putative new idea for AI control; index here.

Along with Jan Leike and Laurent Orseau, I've been working to formalise many of the issues with AIs learning human values. I'll be presenting part of this at NIPS and the whole of it at some later conference. Therefore it seems best to formulate the whole problem in the reinforcement learning formalism. The results can generally be easily reformulated for general systems (including expected utility).

POMDP

A partially observable Markov decision process without reward function (POMDP\R), $\mu = (\mathcal{S}, \mathcal{A}, \mathcal{O}, T, O, T_0)$, consists of:

* a finite set of states $\mathcal{S}$,
* a finite set of actions $\mathcal{A}$,
* a finite set of observations $\mathcal{O}$,
* a transition probability distribution $T : \mathcal{S} \times \mathcal{A} \to \Delta\mathcal{S}$,
* a probability distribution $T_0 \in \Delta\mathcal{S}$ over the initial state $s_0$,
* an observation probability distribution $O : \mathcal{S} \to \Delta\mathcal{O}$.

The agent interacts with the environment in cycles: in time step $t$, the environment is in state $s_{t-1} \in \mathcal{S}$ and the agent chooses an action $a_t \in \mathcal{A}$. Subsequently the environment transitions to a new state $s_t \in \mathcal{S}$ drawn from the distribution $T(s_t \mid s_{t-1}, a_t)$ and the agent then receives an observation $o_t \in \mathcal{O}$ drawn from the distribution $O(o_t \mid s_t)$. The underlying states $s_{t-1}$ and $s_t$ are not directly observed by the agent.

An observed history $h_t = a_1 o_1 a_2 o_2 \ldots a_t o_t$ is a sequence of actions and observations. We denote the set of all observed histories of length $t$ with $\mathcal{H}_t := (\mathcal{A} \times \mathcal{O})^t$. For a given horizon $m$, call $\mathcal{H}_m$ the set of full histories; then $\mathcal{H}_{<m} = \bigcup_{t<m} \mathcal{H}_t$ is the set of partial histories. For $t' > t$, let $a_{t:t'}$ be the sequence of actions $a_t a_{t+1} \ldots a_{t'}$, let $o_{t:t'}$ be the sequence of observations $o_t o_{t+1} \ldots o_{t'}$, and let $s_{t:t'}$ be the sequence of states $s_t s_{t+1} \ldots s_{t'}$.

The set $\Pi$ is the set of policies, functions $\pi : (\mathcal{A} \times \mathcal{O})^* \to \Delta\mathcal{A}$ mapping histories to probability distributions over actions. Given a policy $\pi$ and environment $\mu$, we get a probability distribution over histories:

* $\mu(a_1 o_1 \ldots a_t o_t \mid \pi) := \sum_{s_{0:t} \in \mathcal{S}^{t+1}} T_0(s_0) \prod_{k=1}^{t} O(o_k \mid s_k)\, T(s_k \mid s_{k-1}, a_k)\, \pi(a_k \mid a_1 o_1 \ldots a_{k-1} o_{k-1})$.

The expectation with respect to the distributions μ a
422b4552-ab84-4fc3-8864-8b0645e5253d
trentmkelly/LessWrong-43k
LessWrong
Optimization Markets

How can we coordinate on solving important computational problems? And how can we do this safely?

Computing Markets

We can think of computing power as a resource, and one way to get people to contribute their resources to a project you think is important is to pay them! The global computing power market is already a $45 billion industry, and this is expected to roughly double in the next decade. One popular model for cloud computing is to rent the services of someone else's computer, paying by the hour. Another approach is to pay for results, using a bounty. A legible delegated software system, such as a smart contract, can act as an escrow service, which releases funds under two conditions:

* Someone has submitted a solution which the delegated system can verify, in which case the contract pays out to the solver.
* The bounty has expired, and only the principal that originally deposited the funds can reclaim them.

Optimization Markets

Optimization problems have a measure of "how good a solution is", in addition to criteria for what makes a solution valid. And we can use this additional structure to pay solvers more for better solutions. The creator of an optimization market likely has some sense of what they'd be willing to pay for solutions of varying quality. And they can use that information to structure a payout function.

One example architecture would be to allow anyone in a large pool of participants to submit solutions, with top solutions being published to the entire pool as they're submitted. The market creator seeds the market with an initial solution, a payout function, and funds to pay solvers. If a participant has a better solution than the current best, they can submit it and claim the corresponding marginal payout for their marginal improvement. If the optimization metric goes from 0 to 100, and the slope of the payout function is a constant $1/unit of improvement, then the optimization market pays out $5 for improving the top solution fro
1faf3881-3e56-4cfd-8810-605d419e07d6
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Why don't governments seem to mind that companies are explicitly trying to make AGIs?

*Epistemic Status: Quickly written, uncertain. I'm fairly sure there's very little in terms of the public or government concerned about AGI claims, but I'm sure there's a lot I'm missing. I'm not at all an expert on government or policy and AI.*

*This was originally posted to Facebook* [*here*](https://www.facebook.com/ozzie.gooen/posts/10165781221815363)*, where it had some discussion. Many thanks to Rob Bensinger, Lady Jade Beacham, and others who engaged in the discussion there.*

---

Multiple tech companies now openly claim to be working on developing AGI (Artificial General Intelligence). As written in a lot of work on AGI (see [Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies), as an example), if any firm does establish sufficient dominance in AGI, they might have some really powerful capabilities:

* Write bots that could convince (some) people to do almost anything
* Hack into government weapons systems
* Dominate vital parts of the economy
* Find ways to interrupt other efforts to make AGI

And yet, from what I can tell, almost no one seems to really mind? Governments, in particular, seem really chill with it. Companies working on AGI get treated similarly to other exciting AI companies. If some company were to make a claim like,

> "We're building advanced capabilities that can hack and modify any computer on the planet"

or,

> "We're building a private nuclear arsenal",

I'd expect that to draw attention. But with AGI, crickets. I assume that governments dismiss corporate claims of AGI development as overconfident marketing-speak or something.

You might think,

> "But concerns about AGI are really remote and niche. State actors wouldn't have come across them."

That argument probably applied 10 years ago. But at this point, the conversation has spread a whole lot. [Superintelligence](https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1) was released in 2014 and was an NYT bestseller. There are hundreds of books out now about concerns about increasing AI capabilities. Elon Musk and Bill Gates have both talked about it publicly. This should be one of the easiest social issues at this point for someone technically savvy to find. The risks and dangers (of a large power-grab, not of alignment failures, though those too) are really straightforward and have been public for a long time.

Responses
---------

In the comments to my post, a few points were made, some of which I was roughly expecting. Points include:

1. **Companies saying they are making AGI are ridiculously overconfident**
2. **Governments are dramatically incompetent**
3. **AGI will roll out gradually and not give one company a dominant advantage**

My quick responses would be:

1. I think many longtermist effective altruists believe these companies might have a legitimate chance in the next 10 to 50 years, in large part because of a lot of significant research (see everything on AI and forecasting on LessWrong and the EA Forum). At the same time, my impression is that most of the rest of the world is indeed incredibly skeptical of serious AGI transformation.
2. I think this is true to an extent. My impression is that government nonattention can change dramatically and quickly, particularly in the United States, so if this is the crux, it might be a temporary situation.
3. I think there's substantial uncertainty here.
But I would be very hesitant to put over a 70% chance that: (a) one, or a few, of these companies will gain a serious advantage, and (b) the general-purpose capabilities of these companies will come with significant global power capabilities. AGI is general-purpose, so it seems difficult to be sure that your company can make it *without* it being an international security issue of some sort or other.

Updates
-------

This post was posted to [Reddit](https://www.reddit.com/r/agi/comments/ro31un/why_dont_governments_seem_to_mind_that_companies/) and [Hacker News](https://news.ycombinator.com/item?id=29680031), where it had a total of around 100 more comments. The Hacker News crowd mostly suggested Response #1 ("AGI is a pipe dream that we don't need to worry about").
f32d60ce-8356-4af2-b9de-5401c19fd18f
trentmkelly/LessWrong-43k
LessWrong
How do I find Likelihoods and Prior probabilities for Complete blood count (CBC) values for cats?

I'm in the midst of reading Eliezer's sequences and I have learned Bayes' Theorem because of it. I wanted to apply it to my friend's current uncertain situation. Her cat was diagnosed with a disease called FIP and she has treated her cat with remdesivir. However, according to studies, there is a chance of about 10% that her cat might relapse after finishing the treatment. So she has been taking blood tests ever since to know whether her cat has relapsed or not.

So I wanted to apply Bayes' rule to the blood cell counts and biochemistry to update the prior probability of 10% each time she gets a new lab result. Of course, my problem is with finding P(B|A) and P(B|~A), where A is the event that the cat has been cured. I have found a study that shows blood cell counts for cats that are infected with FIP. Their mean, range, and median are also available. I know I can use a normal distribution to calculate P(B|~A). However, I could not find a study that shows these values for healthy cats so I can get P(B|A). I don't exactly know what to search for, really... "healthy cat complete blood count" or "biochemistry" didn't yield anything in Google Scholar. I also know that this must be available somewhere, because all blood tests come with a reference range.
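For illustration, here is a minimal sketch of the intended update, modeling both likelihoods as normal distributions. Every number below is hypothetical and would need to be replaced with values from the FIP study and a healthy-cat reference range:

```python
from scipy.stats import norm

# Hypothetical likelihood models (all numbers invented for illustration):
# a white-blood-cell count, modeled as normal for cured vs. relapsed cats.
cured = norm(loc=9.0, scale=2.5)      # P(B | A): healthy/cured reference values
relapsed = norm(loc=17.0, scale=4.0)  # P(B | ~A): values reported for FIP cats

prior_cured = 0.90    # P(A): ~10% relapse rate after treatment
observed_wbc = 12.0   # the new lab value B

# Bayes' rule with continuous evidence: use probability densities.
likelihood_cured = cured.pdf(observed_wbc)
likelihood_relapsed = relapsed.pdf(observed_wbc)
posterior_cured = (likelihood_cured * prior_cured) / (
    likelihood_cured * prior_cured
    + likelihood_relapsed * (1 - prior_cured)
)
print(f"P(cured | WBC = {observed_wbc}) = {posterior_cured:.3f}")
# The posterior becomes the prior for the next blood test
# (valid only if successive tests are treated as independent given A).
```

Each new lab result repeats this step with the previous posterior as the prior; the open question in the post is where the `cured` distribution's parameters should come from.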
b9db7894-293d-4b54-a6fe-d2e2fca3d128
trentmkelly/LessWrong-43k
LessWrong
Partial preferences and models

Note: working on a research agenda, hence the large amount of small individual posts, to have things to link to in the main documents.

EDIT: This model is currently obsolete, see here for the most current version.

I've talked about partial preferences and partial models before. I haven't been particularly consistent in terminology so far ("proto-preferences", "model fragments"), but from now on I'll stick with "partial".

Definitions

So what are partial models, and partial preferences? Assume that every world is described by the values of $N$ different variables, $X = \{x_1, x_2, \ldots, x_N\} \subseteq \mathbb{R}^N$.

A partial model is given by two sets, $Y$ and $Z$, along with an addition map $+ : Y \times Z \to X$. Thus for $y \in Y$ and $z \in Z$, $y + z$ is an element of $X$. We'll want $+$ to have 'reasonable' properties; for the moment I'm imagining $Y$ and $Z$ as manifolds and $+$ as a local homeomorphism. If you don't understand that terminology, it just means that $+$ is well behaved and that as you move $y$ and $z$ around, you move $y + z$ in every direction in $X$.

A partial preference given the partial model above is two values $y^+, y^- \in Y$, along with the value judgement that:

* for all $z \in Z$, $y^+ + z$ describes a better world than $y^- + z$.

We can generalise to non-linear subspaces, but this version works well for many circumstances.

Interpretations

The $Y$ are the foreground variables that we care about in our partial model. The $Z$ are the 'background variables' that are not relevant to the partial model at the moment. So, for example, when I contemplate whether to walk or run back home, then the GDP of Sweden, the distance Voyager 2 is from Earth, the actual value of the cosmological constant, the number of deaths from malaria, and so on, are not actually relevant to that model. They are grouped under the (irrelevant) background variables category. Notice that these variables are only irrelevant if they are in a 'reasonable range'. If the GDP of Sweden had suddenly hit zero, if Voyager 2 was about to crash into my head, if the cosmological constant
1375d471-1274-48fa-bce5-c6fcb728a0fc
trentmkelly/LessWrong-43k
LessWrong
Correspondence Bias Reversal?

I'm currently taking an introductory Russian class. I have been using Anki to memorize the vocabulary, and I do appear to know more vocabulary than anyone else in the class[1] except for one other individual. This individual has far surpassed everyone else in the class, in every area (grammar, vocabulary, etc). Several other students have made comments along the lines of "Geez, do you spend all your time studying?", and it had occurred to me that I should ask him what sort of study techniques he's using, and possibly try them out myself.

At this point, it occurred to me that this may be a reversal of Correspondence Bias. The other students and I assumed that his superior abilities were due to his own particular methods of studying, and not to any sort of innate language ability. And yet, I think it is at least likely that there are more people in the world with a natural talent for languages, than there are people who have found some kind of spectacular studying technique.

This is just a brief anecdote of a single life experience. Are there any systematic effects that we know of that work counter to Correspondence Bias?

[1] Data supporting this claim: in activities that we conduct inside the classroom, I have consistently remembered words that other members of the class do not, and it is very rare that a fellow student remembers a word from previous classes that I did not recall independently. Exceptions seem to have occurred mostly when a) it was not a word I put into Anki after class, or b) the other student is the individual I mentioned in the post.
84c7de86-12a9-4f39-8740-d414fbe4127c
StampyAI/alignment-research-dataset/arxiv
Arxiv
Can We Distinguish Machine Learning from Human Learning?

1 Introduction
---------------

There is enormous interest in, and confidence regarding, Machine Learning. The situation is reminiscent of Archimedes's observation about the power of a lever: "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world" (Figure 1). Enormous computing power is used to show that, for example, a computer can teach itself to play Go, and become better than human experts (Silver et al., 2017). Similarly, an algorithm has learned to play computer games, having been informed only whether it has won or lost, and the allowed set of moves (Mnih et al., 2015).

![Figure 1](https://media.arxiv-vanity.com/render-output/7649782/x1.jpg)

Figure 1: The claim attributed to Archimedes (NYU, no date)

More generally, there is hope that machine learning can both augment human science and be a true partner in discovery. We propose a path for exploring one aspect of that hope. Specifically, we posit that the progress of science hinges on the discovery or formulation of "rules." A rule is a compact statement that supports or justifies calculations, or laboratory procedures, which, in turn, can be validated by the world. With this in mind, we have sought an abstract task of rule discovery, which does not improperly advantage either a Machine Learner (ML) or a Human Learner (HL). Specifically, can we identify classes of problems for which humans can learn the rules better than machines, and vice versa?

2 Statement of the Problem
---------------------------

We note that a "fair" study of how human learning aligns, or does not align, with machine learning requires first that a task is being posed to humans and machines in a sufficiently comparable way. We also need to deal with the fact that even the best learning programs today may require millions of examples to be able to tell a cat from a dog, while young children seem to learn the difference much more efficiently. Even more to the point here, computers may be presented "training examples" of cats as a set of 2-dimensional images. Humans experience cats as objects moving continuously about in 3-dimensional space, and may actively interact with them. We say that rule A is harder to learn than rule B if on average it takes more training episodes to learn rule A than rule B. Whether the number itself is measured in tens or millions is not the issue. What is interesting, and may provide a pathway to better understanding differences between human and machine learning, are pairs of tasks, let us call them (A,B), such that task A is harder than B for a machine but easier than B for a human, or vice versa. The idea is illustrated in Figure 2. The relation of "interesting pair" is ordinal and does not depend at all on the units of measurement for either scale. In the example of Figure 2, there are three classes of rules, A, B, C. All of the examples in Class A cross all of the examples in Class B. When this can be found, it will provide a foundation for understanding what distinguishes the rules in those two classes.
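To make the "interesting pair" criterion concrete before turning to the operational challenges, here is a minimal sketch of the crossing test; the difficulty numbers are invented for illustration and are not from the paper:

```python
def is_interesting_pair(rule_a, rule_b):
    """Return True if the two rules' difficulty orderings cross:
    one rule is harder for humans but easier for the machine,
    or vice versa. Each rule carries a mean difficulty score
    (e.g., average training episodes to criterion)."""
    human_order = rule_a["human"] - rule_b["human"]
    machine_order = rule_a["machine"] - rule_b["machine"]
    # The joining lines in Figure 2 cross exactly when the two
    # orderings have opposite signs.
    return human_order * machine_order < 0

# Invented example data: episodes-to-criterion, averaged over learners.
rule_a = {"human": 40.0, "machine": 90_000.0}
rule_b = {"human": 85.0, "machine": 30_000.0}
print(is_interesting_pair(rule_a, rule_b))  # True: A easier for humans, harder for ML
```

Because the test uses only signs of differences, it is invariant to the units on either scale, matching the ordinal character of the definition.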
While a class, such as Class C, might have some internal crossings, that will not provide good information about the reasons for the crossing.

![Figure 2](https://media.arxiv-vanity.com/render-output/7649782/x2.jpg)

Figure 2: Suppose that a number of rules in some family A have been studied, and a number of rules in family B. They are ordered in difficulty for humans along one line, and in difficulty for ML along another line. The two symbols representing the same rule are then joined. If the lines joining the symbols for two rules, A and B, must cross then we say that they form an interesting pair. See text.

There are several operational challenges in making this notion precise. First, for both humans and machines, the difficulty of learning a rule may depend quite strongly on the specific training sequences. The training sequences may be such as to cause one (or both) types of learning to proceed quickly. Or they may make learning difficult (that is, slower) for either, or both, types of learners. Thus research must either be able to identify these order effects, or must average over a sufficiently large set of training sequences, for both ML and HL. Since a computer can be told to forget everything it has experienced, this is relatively easy for the ML arm of the research. For the HL arm, by contrast, different participants must be employed for each training sequence, to provide a comparable tabula rasa. A second concern is intrinsic randomness. Almost all contemporary ML approaches involve some stochasticity. We ought to average over multiple runs. Of course, there are individual human learner differences as well, and we again must average over them. This suggests that such research may require hundreds or more human participants. Fortunately, such studies have become possible and accepted using Mechanical Turk techniques. To decide which type of learner is better at a task, a non-parametric test such as the Wilcoxon rank test can be applied. There is a third concern. We are interested not only in the relative learnability of specific classes of rules, but also, what is perhaps even more important, in whether transfer from one learning task to another is the same or different for HL and ML. In order to explore this, we will have to develop concrete measures of transferability. Conceptually, the problem is the same as the one represented in Figure 2. However, in this case the black dots will represent transfer pairs. Thus the symbol $A_1$ would represent the amount of transfer from one specific rule, say $R_1$, to another, $R_2$.

3 Background to the Problem
----------------------------

Every aspect of this task has an enormous and relevant literature. In this brief summary we can point to only a few of the publications that provide a framework for the task posed here.

### 3.1 Human Learning

Many of the classic domains that were once viewed as pinnacles of human intelligence (chess, logical reasoning) have been conquered by relatively simple algorithms.
Conversely, tasks that are easy for humans and other animals — such as flexible locomotion and rapid and robust visual categorization — are all at the cutting edge of modern artificial intelligence research. This is known as Moravec's paradox. Consider, for example, that a simple electronic calculator can do arithmetic orders of magnitude faster than a brain made up of 100 billion neurons. It is not just that a calculator is so much faster than a person. When applying simple algorithms, people make mistakes that computers never make. For example, a large minority of people are able to correctly define what makes a number even or odd, but nevertheless systematically misclassify numbers like 798 as odd. People who know that a triangle is a three-sided polygon nevertheless claim that equilateral triangles are "better" triangles than scalene triangles, and frequently misclassify the latter type of triangle as not a triangle at all (Lupyan, 2013). Cognitive science and neuroscience help us understand what is going on here. What makes computers fast and accurate is their ability to perform simple computations with high precision. Biological computation is in comparison much slower, and — critically — much noisier. This means that long serial computations (even as 'simple' as binary addition) cannot be achieved with high precision. Biological neural networks compensate through the use of massively parallel and distributed computation, an observation presciently made by von Neumann more than half a century ago (reprinted as von Neumann, 2012). This parallel and distributed architecture is ill-suited for carrying out the kinds of computations (arithmetic, logic) that are trivial for electronic circuits. The reason people misclassify 798 as an odd number is not that they are inattentive or careless. Rather, applying abstract rules — even very simple ones like 'MOD 2' — requires representing only the parts of the input that matter — here, whether a number is evenly divisible by 2. This requires projecting the original representation (which contains information about the number, its magnitude, the color of the font that comprises it, its location in space, etc.) to a space with a discrete decision boundary (even/odd). In this process, a number like 798 is closer to the odd/even boundary than a "more even" number like 400; occasionally, 798 ends up on the wrong side, as reported by Lupyan (2013, 2015). The same similarity-based processing that makes it difficult for people to apply an abstract rule quickly and robustly is ideal for learning similarity-based representations and discovering (even very subtle) covariance structures present in the input (Rogers and McClelland, 2004; Rumelhart et al., 1986). Even six-month-old infants can learn what cats have in common that distinguishes them from dogs (Quinn and Eimas, 1996). It is not a coincidence that research attempts to build categorization algorithms that approximate human categorization began to succeed only when the underlying architecture moved from rule-based 'expert' systems to distributed architectures (such as artificial neural networks) that rely on gradually learning from multiple examples. It is also not a coincidence that we do not use these architectures for doing arithmetic or logic.
![Three Bongard Problems](https://media.arxiv-vanity.com/render-output/7649782/x3.png)

Figure 3: Three Bongard Problems

Despite being slow and error-prone when applying even simple rules, people are capable of remarkable acts of abstraction in tasks requiring *extracting* rules from examples. A classic example of this problem domain are Bongard problems (Bongard, 1967), three of which are shown in Figure 3. In each problem, people are presented with 12 shapes and must formulate a rule that distinguishes the shapes on the left from the shapes on the right. The "rule" formulating the distinction in Figure 3A is that all shapes on the left and none on the right have a narrowing in the middle. Despite the geometric simplicity of the distinction (which can be obtained through computing the polarity of the second derivative), this problem is relatively difficult for people — only 36% of the subjects succeed. The problem shown in Figure 3B is easier — 76% correctly induce that the rule is threes vs. fours. Compared to the previous problem, this one seems to require much more abstraction: the solver needs to abstract over the different instantiations of 'threeness': three line segments; a triangle; three polygons; three notches; etc. The easiest of all is the problem shown in Figure 3C, which was solved by over 95% of our participants. The rule — triangles vs. circles — may seem trivial, but is exceedingly difficult to derive if one does not already know about "circles" and "triangles" — the "circle" images that are shown have no features in common with one another. This challenge was first pointed out by Mikhail Bongard, the original creator of the problems (Bongard, 1967; see also Linhares, 2000). What enables people to solve this problem so easily is that they come to the task having previously learned a set of higher-level categories (e.g., circle, triangle, three). They are then able to flexibly deploy them as hypotheses in a top-down way (Lupyan and Clark, 2015). Many of these units may have been learned in the course of learning one's native language (Majid et al., 2018; Lupyan and Clark, 2015). It is not a coincidence that the relative ease of the problems in Figures 3B-C compared to Figure 3A is strongly correlated with knowing words like "three," "four," "triangle," and "circle". If this analysis is correct, then the key to closing the gap between the human and machine extraction of rules may lie in understanding what units people use when extracting rules from examples, and what makes a unit especially useful.
### 3.2 Machine Learning

When choosing a representative machine learning paradigm for a study such as is contemplated here, the main consideration is to closely match its human learner counterpart. As such, we propose to study reinforcement learning agents: like humans, they play with possible actions, receive rewards or penalties, and update their policies. In addition, reinforcement learning is well-studied in machine learning and has previously achieved impressive abilities in playing games like Go (Silver et al., 2017). To specify the machine learning task, we define the rule game learning problem in terms of a Markov Decision Process $M = (S, A, T, R, \gamma)$:

* The state space $S$ represents the game board as well as historical plays, both successful and unsuccessful.
* The action space $A$ represents possible moves.
* The state transition probability $T$ specifies how the game is updated to new state $s'$ upon playing move $a$ in state $s$: $P(s' \mid s, a) := T(s', s, a)$.
* The reward function $R(s, a)$ is $1$ if the move $a$ is accepted, $-1$ if not. The cumulative reward therefore penalizes long sequences of wrong moves.
* $\gamma$ is a standard discounting factor in reinforcement learning to combine long-term rewards.

It is important to point out a subtlety in evaluation. Since our overall task is "rule learning," a natural goal might be for the machine to exactly identify the rule. While this is certainly reasonable, it is nonetheless restrictive for several reasons. First, most reinforcement learning agents represent their policies (why they choose an action under a given circumstance, or state) implicitly: as a Q-table, or value function approximation, or a policy neural network. It is difficult to extract the "rule" in a human readable form. Second, even if we could do that, we would then need a quality measure to compare the machine's rule vs. the ground-truth rule (this applies to human learners, too). This can be difficult to do. Finally, it is possible that the human learners say one thing (their purported rule) but do another (playing the game in a way that does not match their purported rule), which creates complications in comparing to machines. Therefore, rather than attempting to extract rule knowledge from the machine, we will measure machine performance by how well machines can learn and play games with different rules; a minimal sketch of such a game environment appears below. More precisely, we will consider discounted reward, average per-round success rate, and the speed at which these measures asymptote. Note that these measures can be equally applied to human learners, which we plan to do (except that for human learners, we will also ask them to state their purported learned rule at the end). Eventually, the relative difficulty of two rules to a reinforcement learning agent can be measured by the agent's learning curves for the two rules: what levels of performance the curves reach, and how fast they get there.

### 3.3 Translation to Real Problems

To understand the importance of rule learning in applications of artificial intelligence, it is useful to present a bit of history. The earliest applications of artificial intelligence were typically rule-based—e.g., for medical diagnosis, with a very early example being Shortliffe et al. (1975) (for a more nearly modern view see Lim et al., 1993), or electronic checklists to support automated diagnosis of equipment problems (Fung, 1989)—meaning that the rules were explicitly programmed by humans, rather than being learned endogenously by machines.
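As a concrete companion to the MDP formulation in Section 3.2, here is a minimal sketch of how the block-and-buckets game of Section 4 might be encoded as an environment. All class and function names are invented for illustration; only the state, move, and reward structure follows the text:

```python
import random
from dataclasses import dataclass, field

@dataclass
class RuleGameEnv:
    """Minimal rule-game environment: the hidden rule is any predicate
    over (board, move, history). Reward is +1 for an accepted move and
    -1 for a rejected one, matching R(s, a) in Section 3.2."""
    rule: callable
    n_positions: int = 20
    board: dict = field(default_factory=dict)    # position -> color
    history: list = field(default_factory=list)  # accepted moves so far

    def reset(self, n_blocks=7, colors=("red", "blue", "green", "black")):
        positions = random.sample(range(self.n_positions), n_blocks)
        self.board = {p: random.choice(colors) for p in positions}
        self.history = []
        return dict(self.board)

    def step(self, position, bucket):  # bucket is "left" or "right"
        move = (position, self.board.get(position), bucket)
        if position in self.board and self.rule(self.board, move, self.history):
            del self.board[position]
            self.history.append(move)
            return dict(self.board), +1, len(self.board) == 0
        return dict(self.board), -1, False

# Example hidden rule: remove blocks left to right, any bucket allowed.
leftmost_rule = lambda board, move, hist: move[0] == min(board)
env = RuleGameEnv(rule=leftmost_rule)
state = env.reset()
```

An agent interacting with this environment receives exactly the +1/-1 signal described above, so any standard tabular or deep reinforcement learner could in principle be dropped in.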
These applications, while useful at improving efficiency and reducing tedium, could never significantly outperform the best human experts. They could allow novices to achieve near-expert performance, and improve the consistency and accuracy of human experts, but since they were dependent on "handcrafted knowledge," they were inherently limited by human capabilities. In other words, rule-based AI could be "faster and less error-prone," and have "a higher degree of precision," but represent at best an incremental improvement over human capability. More recently, there has been an explosion of AI capabilities, due to the adoption of machine learning and statistical pattern recognition. This has resulted in truly spectacular achievements, such as the development of a computer program (AlphaGo) that has far outstripped the world's best Go master in a remarkably short period of time. Thus, it would appear that we are now in an era where artificial intelligence has surpassed human intelligence, although the "rules learned" by machines are embedded in a truly opaque forest of linking parameters. However, artificial intelligence still has some dramatic limitations. In particular, it works best in highly constrained environments (e.g., games such as Go or chess), where it is clear which types of "moves" or rules are permitted (even if the software needs to learn the rules on its own by observation; Mnih et al., 2015). In less structured environments, it often performs poorly (or at least non-intuitively), making "mistakes" or misinterpretations that in some cases would have been obviously (or hilariously) wrong to even the most naive human subject; see for example Goodfellow et al. (2017) and Krakovna (2018). As a result, many applications of AI are still quite "small." Even though deep neural networks are now capable of recognizing objects in a complex visual field, they are still of limited reliability in the real world. Thus, for example, in development of software for identifying skin cancer, "if an image had a ruler in it, the algorithm was more likely to call a tumor malignant" (Patel, 2017). Visual recognition can automate the review of vast quantities of visual data; even if the process is error-prone, it can still be useful by flagging potential targets for human review. However, the task is limited by the binary (or near-binary) nature of the response variable. AI is typically not used to recognize every item in a complex visual field, only to flag those that meet specified criteria. When the classification process is more open-ended, image recognition can still yield surprising errors. In some cases, these errors are not too bad, and might also be made by humans (e.g., incorrectly classifying a comforter as a pillow, or a dog as a cat or wolf). In other cases, however, the errors are more serious; for example, erroneously classifying a turtle as a rifle (or the reverse) could have significant adverse consequences (Molnar, 2018; Athalye and Sutskever, 2017). Moreover, even for a single item, it is not difficult to fool an algorithm; an interactive example is given by Papernot and Frosst (2019). Therefore, humans are still needed for higher-level tasks—e.g., making decisions of what to do about objects after they have been recognized by machine learning.
This is especially true in situations where the decisions have high stakes (e.g., deciding on a medical treatment, rather than a chat bot deciding on which product to recommend to a customer). Thus, suitability for machine learning (Brynjolfsson et al., 2018) is judged to be low if a task requires "complex, abstract reasoning," while computers are more suitable for routine repetitive tasks, where efficiency is prized and the cost of errors may be low. Machine learning can also be vulnerable to adversarial attacks such as fraud (Levin et al., 2019). The proposed research—identifying which types of rules (or changes in rules) are more easily learned by humans, and which are more easily learned by computers—could pave the way for more complete human-assisted AI (or AI-assisted human decision making), in which computers can take over more complex functions, but in a gradual manner, consistent with a thorough understanding of their capabilities. As stated by Polson and Scott (2018), AI can yield "different and better jobs, new conveniences, freedom from drudgery, safer workplaces, better health care, fewer language barriers, new tools for learning and decision-making that will help us all be smarter, better people."

4 Game as a Learning Task
--------------------------

As an illustration, one might consider a learning task in which colored blocks are placed in any one of $L = 20$ positions along a line, as in Figure 4. A move is to take a block and place it in a bucket at one end of the line or the other. Although there are 20 places in the line, on any given episode of play, somewhere between five and ten colored blocks are placed randomly in some of the positions, with no more than one block in a position, as in Figure 5.

![Figure 4](https://media.arxiv-vanity.com/render-output/7649782/x4.jpg)

Figure 4: An example display might have some number, in this case, 20, of positions along a line. There are "buckets" at either end of the line. A "rule" specifies the order in which objects are to be moved from the display. It further specifies, when an object may be moved, the bucket into which the object, at that move, is to be placed.

![Figure 5](https://media.arxiv-vanity.com/render-output/7649782/x5.jpg)

Figure 5: This is an example initial configuration in which colored objects are placed in some of the positions. The initial configuration for each episode of play is to be generated randomly.

One player, Alice, formulates a rule (see some examples in Exhibit 1) and observes while a second player, Bob, tries to play the game. An episode ends in success when all of the blocks have been removed in accordance with the rule, and each has been placed in a bucket allowed by the rule. To be concrete, some possible rules are shown in Exhibit 1.
Exhibit 1

* Remove items from left to right, placing each object in any bucket.
* Remove items from left to right, placing each in the nearest bucket.
* Remove all blue blocks into the left bucket and all red blocks into the right bucket; all other blocks can go in either bucket.
* Remove blocks from the outside in, starting at the left end. Place each in the farthest bucket.
* Place any block in any bucket, except that if there is a red block in the seventh position, reading from the left, it must be the third item removed, and must go into the right bucket.
* Place a first block in either bucket, and thereafter remove blocks in any order, placing them alternately into left and right buckets.

5 How Many Rules are There?
----------------------------

The size of the rule space is enormous. For example, the rule "remove objects, from left to right, and place them all on the left" is one of $L!$ possible orders. In addition, if there are $C$ colors, and allowing for interaction between position and color, there are $2^{CL}$ possible rules for each order. So, for this particular instance, with, say, only three colors [our example actually has four], $C = 3$ and $L = 20$, there are $20! \cdot 2^{60} = 2.8 \times 10^{36}$ possible rules. In theory, this should be an advantage for machine learning, since a computer could search a much larger fraction of the space of possible rules than any human could. In practice, however, humans may have good intuition for certain types of rules, if based on preexisting concepts (e.g., from natural language).

6 What is a Simple Rule?
-------------------------

Because the problem that we pose (about rule learning) could easily be said to encompass all of knowledge, both human and otherwise, it may be helpful to compress the space of examples to be considered. This may necessarily be somewhat arbitrary. There are several ways to limit the space of possible rules. One is to somehow limit the expressive power of the language used to encode the rules. If that is the approach, there are still many design decisions to be made. Since a rule will not be "interesting" unless it can be created and learned by humans, the gold standard for these decisions is going to be what is learned from human learners participating in the proposed "game" (see discussion in Section 4 above). For example, one could say that the rule should be "not too big," and one way to measure size could be by the number of bits in the rule, together with the number of bits in the code book (Rissanen, 1989). However, it is easy to create examples (as we illustrate below) that can be described in a relatively small string, but require enormous exploration to be determined. The situation becomes even more difficult if we replace the notion of "learning the rule" with some variant of being Probably Approximately Correct. In this case, rules that encompass rare exceptions may be considered well learned even by learners who do not learn those exceptions. In addition, constraints might be proposed about what kinds of information about the display, and about previous moves, can enter into the rules to be considered. A rule will be an unambiguous statement that makes it possible, at any point during an episode, to determine whether a block can be moved at that point, and whether it has been moved to the correct bucket.
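A quick sanity check of the counting in Section 5 (a sketch; Python is used here only as a calculator):

```python
from math import factorial

L = 20          # positions
C = 3           # colors considered in the interaction term
orders = factorial(L)          # L! possible removal orders
color_position = 2 ** (C * L)  # 2^(CL) position-color interaction rules per order
total = orders * color_position
print(f"{total:.1e}")  # 2.8e+36, matching the figure quoted above
```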
One way of limiting the set of allowed rules is to require that the rule for what is "allowed" not be permitted to depend on unsuccessful prior attempts. This would eliminate rules such as "you must try two moves that are not allowed, before you will be permitted to make a move that is allowed." Even when some types of rules are excluded, there are still many possible rules, as discussed in Section 5. As an extreme example, the rule could contain a precise description of one possible starting position, with blocks of colors $c_1, c_2, \ldots, c_k$ occupying positions $p_1, p_2, \ldots, p_k$; let us call this configuration $D_1$. A rule, let us call it $R_1$, might say "if $D_1$ is presented, place everything on the right; otherwise, place everything anywhere." While this rule appears simple, discovering it would require an enormous search. Even if given the hint that "there is only one starting configuration for which you are not free to do whatever you like," the expected time to discover this particular configuration would be half the number of possible configurations. (With bad luck, one might even accidentally do the allowed thing for this configuration, and would find that out only after trying another solution for every configuration, until one is told that a move is not allowed.) Thus it seems reasonable to eliminate rules with such a strong dependence on the initial configuration. For this particular game, with $K$ positions filled, there are $2^K \binom{L}{K}$ possible initial configurations. If there were two special configurations, the "learning process" would take 50% longer, in expectation. Of course, rules of that form could also be disallowed. Another promising approach is to develop a specific language for the rules, and then place a limit on the number of terms from that language that can be present in an allowed rule. For example, one could permit a rule to use information about both position and color, provided that the rule can be expressed in a "limited number of bytes." Of course, such a constraint would depend not only on the rule itself, but also on the cleverness of the team specifying the code book and the notation, so it may be difficult to establish whether a given rule "can be expressed in less than $N$ bytes." A second issue is to limit the kind of "scratch tape" or "auxiliary registers" that may be used in the process of learning a rule; for example, whether the machine-learning algorithm is allowed to track only the history of successful moves in a given episode, or also any unsuccessful move attempts. (Of course, both human and machine learners will have to remember previous episodes, in order to find a rule. Here, the comparison between machine and human learners is complicated by the fact that humans will remember (some fraction of) both successful and unsuccessful moves, but may remember them imperfectly, or even erroneously.) Any particular framework for machine learning, along the lines of the above, will of course limit the aspects of history that can be used in rule learning. For example, the present move for an object of a given color could be restricted to depend only on the most recent correct move of an object having the same color. Thus, "blues must be dropped alternately left and right, when they are dropped" would be allowed under this type of rule (see the sketch below), but "reds must be dropped cycling around and counting the Fibonacci numbers for the positions (modulo 4)" would not be allowed.
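To illustrate the history restriction just described, a rule can be written as a predicate over the board, the proposed move, and the accepted-move history; the allowed class only ever consults the most recent correct move of the same color. A minimal sketch, with invented names, continuing the environment interface sketched in Section 3.2:

```python
# A rule is a predicate: rule(board, move, history) -> bool, where
# move = (position, color, bucket) and history holds accepted moves.

def alternating_blues(board, move, history):
    """'Blues must be dropped alternately left and right, when they are
    dropped': allowed under the restriction, since it only consults the
    most recent correct move of the same color."""
    position, color, bucket = move
    if color != "blue":
        return True
    last_blue = next((m for m in reversed(history) if m[1] == "blue"), None)
    return last_blue is None or bucket != last_blue[2]
```

A Fibonacci-counting rule, by contrast, would need the full count of prior same-color moves, which this restricted interface deliberately withholds.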
With regard to the treatment of past history in rule learning, there are a few key observations. First, one may find that human subjects are capable of both generating and learning rules requiring a more complex treatment of history than we might initially assume. Whether this happens naturally, or can be elicited with suitable instruction, is an open question. Second, we do not yet know how such a study will apply to real-world rule induction situations. Such rules might apply to tactical issues, such as diagnosing the problem with a portable generator. On the other hand, rules may also be sought for strategic issues, such as identifying methods used by an adversary such as a fraudster. One must, for this kind of translation of research results, explore what kinds of rules have been proposed in the literature on these issues. While the rules will almost surely be related to the adversary's historical behavior, they will probably not contain complex mathematical concepts. Finally, scientific discovery is also a kind of "rule-learning," where the rules are the Laws of Nature. For example, the historian of science Peter Galison (Galison et al., 1997) has given primacy not to theories (as in the work of Thomas Kuhn (Kuhn, 1962); note that at least one of the present authors holds that "Kuhn is not a Kuhnian," as explained in the postscript to the second edition of his influential book (Kuhn, 1970)), but to experiment and observation. It is well known that for physics, at least, a first-order model (with respect to time) is clearly not adequate. For example, the distance that an object falls depends quadratically on the time. If we imagine steps in the game as a time variable, such a quadratic dependence cannot be imposed if the rule must depend only on the most recent previous event. For gravity, the new increment, in this case distance, is not given by a static rule, but must change after each new increment. There are examples of physical rules that involve only the relation between a velocity and the current state of the system (and not on some "wall clock"; see for example Carter, 2003). As these examples suggest, any restrictions on the types of rules that are allowed in research on rule learning will somehow limit the types of situations to which the findings of that learning can be applied.

7 Discussion
-------------

What might be gained by the investigations sketched here? There are potential implications for both psychology and computer science. If one can find a "comprehensible" (to humans) distinction between rule pairs that are "interesting" and those that are "uninteresting," that will suggest new lines of research.

* For psychology: can we train people to do better on the classes which are, by comparison, ML-easy and HL-hard?
* For computer science: can one extend learning methods to make some of the classes that are ML-hard become "easier"?
* For application of this research to real-world problems, this research may lead to better harmonization of human and machine capabilities to jointly solve complex problems in a manner consistent with their capabilities.

In particular, expanding on the third point, deeper understanding of the differences between human and machine learning might make it possible to "triage" problems that lack known rules of procedure, and direct such problems to humans or machines, according to which has a better chance of inferring or inducing the correct rule in time for the solution to be useful.
While the successes of Machine Learning are impressive, its consumption of time and energy is a significant factor in potential applications (García-Martín et al., 2019). Machine Learning has shown substantial advances on problems for which deterministic "oracles" exist (such as video games or board games). In these cases the learner is told some part of the rules (such as what moves are allowed), and the remainder is provided by an oracle. Problems of image classification appear deterministic to the machine learner, but of course the human labeling of "ground truth" almost certainly contains errors. The field of generalized language understanding remains very challenging, and the largest ongoing project (Lenat, 1995) seemed to pursue an ever-retreating horizon. The more recent refocus on specific (multiple choice) tasks seems to promise a path to solving the (at present) eighth-grade New York State Regents examination in science. However, this success is apparently limited to chains of reasoning about synonyms and relations, and cannot (yet) deal with information presented in visual diagrams (Boyle, 2019). The line of research proposed here would concentrate specifically on the most visible difference between the way that humans seem to "understand" and the way in which machines do. Humans, in both everyday and scientific problems, reduce complex realities to a set of powerful and concise rules. In the scientific realm, the rules are often mathematical. In the more human realm they may be folkloric, as in "a stitch in time saves nine," or "there is more than one way to skin a cat." Skilled craftspeople know many such rules, and solve novel problems every day, by rethinking what they know, and formulating a useful (if temporary) rule for the situation at hand.
2062b143-1f40-4405-a7bd-99ce97588174
trentmkelly/LessWrong-43k
LessWrong
Polling Thread

This is the third installment of the Polling Thread. This is your chance to ask the multiple choice question you always wanted to throw in. Get qualified numeric feedback on your comments. Post fun polls. These are the rules:

1. Each poll goes into its own top level comment and may be commented on there.
2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll catch option here, but we will see.

If you don't know how to make a poll in a comment, look at the Poll Markup Help.

----------------------------------------

This is a somewhat regular thread. If it is successful I may post again. Or you may. In that case do the following:

* Use "Polling Thread" in the title.
* Copy the rules.
* Add the tag "poll".
* Link to this Thread or a previous Thread.
* Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
* Add a second top-level comment with an initial poll to start participation.
575cbdc2-c675-4841-bd0d-99c12a3fe552
trentmkelly/LessWrong-43k
LessWrong
Kolmogorov complexity makes reward learning worse

A putative new idea for AI control; index here.

In a previous post, I argued that Kolmogorov complexity/simplicity priors do not help when learning human values - that some extreme versions of the reward or planners were of roughly equal complexity. Here I'll demonstrate that it's even worse than that: the extreme versions are likely simpler than a "reasonable" one would be. Of course, as with any statement about Kolmogorov complexity, this is dependent on the computer language used. But I'll aim to show that for a "reasonable" language, the result holds.

So let $(p, R)$ be a reasonable pair that encodes what we want to encode in human rationality and reward. It is compatible with the human policy $\pi_H$, in that $p(R) = \pi_H$.

Let $(p_r, R_r)$ be the compatible pair where $p_r$ is the rational Bayesian expected reward maximiser, with $R_r$ the corresponding reward so that $p_r(R_r) = \pi_H$.

Let $(p_i, 0)$ be the compatible pair where $p_i$ is the indifferent planner (indifferent to the choice of reward), chosen so that $p_i(R') = \pi_H$ for all $R'$. The reward $0$ is the trivial reward.

Information content present in each pair

The planner $p_i$ is simply a map to $\pi_H$, so the only information in $p_i$ (and $(p_i, 0)$) is the definition of $\pi_H$. The policy $\pi_H$ and the brief definition of an expected reward maximiser $p_r$ are the only information content in $(p_r, R_r)$. On the other hand, $(p, R)$ defines not only $\pi_H$, but, at every action, it defines the bias or inefficiency of $\pi_H$, as the difference between the value of $\pi_H$ and the ideal $R$-maximising policy $\pi_R$. This is a large amount of information, including, for instance, every single human bias and example of bounded rationality. None of the other pairs have this information (there's no such thing as bias for the flat reward $0$, nor for the expected reward maximiser $p_r$), so $(p, R)$ contains a lot more information than the other pairs, so we expect it to have higher Kolmogorov complexity.
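Kolmogorov complexity is uncomputable, but compressed length gives a crude computable proxy for the comparison above. A sketch (not from the post; zlib stands in for an ideal description language, and all data are invented):

```python
import random, zlib

random.seed(0)
n_states = 10_000
# A policy: an arbitrary state -> action table (the shared core of every pair).
policy = bytes(random.randrange(4) for _ in range(n_states))
# The "reasonable" pair must additionally encode, per action, how biased the
# human choice was: extra, largely independent information.
bias = bytes(random.randrange(16) for _ in range(n_states))

len_indifferent = len(zlib.compress(policy, 9))        # proxy for K(policy)
len_reasonable = len(zlib.compress(policy + bias, 9))  # proxy for K(policy, bias)
print(len_indifferent < len_reasonable)  # True: more content, longer description
```

This only illustrates the direction of the argument: a pair that carries the bias data on top of the policy cannot have a shorter description than one that carries the policy alone.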
e7056068-1ace-451d-8ce6-331c7470f210
trentmkelly/LessWrong-43k
LessWrong
Open thread, September 11 - September 17, 2017 IF IT'S WORTH SAYING, BUT NOT WORTH ITS OWN POST, THEN IT GOES HERE. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should start on Monday, and end on Sunday. 4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting
f287f360-49e7-4b27-a4f8-90d657d231c5
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Intro to ML Safety virtual program: 12 June - 14 August

**The Intro to ML Safety course covers foundational techniques and concepts in ML safety for those interested in pursuing research careers in AI safety, with a focus on empirical research.** We think it's a good fit for people with ML backgrounds who are looking to get into empirical research careers focused on AI safety. Intro to ML Safety is run by the [Center for AI Safety](https://safe.ai/) and designed and taught by [Dan Hendrycks](http://danhendrycks.com/), a UC Berkeley ML PhD and director of the [Center for AI Safety](http://safe.ai/).

### [***Apply to be a participant***](https://course.mlsafety.org/) ***by May 22nd***

*Website:* [*https://course.mlsafety.org/*](https://course.mlsafety.org/)

**About the Course**
--------------------

Intro to ML Safety is an 8-week virtual course that aims to introduce students with a deep learning background to the latest empirical AI Safety research. The program introduces foundational ML safety concepts such as robustness, alignment, monitoring, and systemic safety. The course takes 5 hours a week, and consists of a mixture of:

* Assigned readings and lecture videos (*publicly available at* [*course.mlsafety.org*](https://course.mlsafety.org/))
* Homework and coding assignments
* A facilitated discussion session with a TA and weekly optional office hours

The course will be virtual by default, though in-person sections may be offered at some universities.

### **The Intro to ML Safety curriculum**

The course covers:

1. **Hazard Analysis**: an introduction to concepts from the field of hazard analysis and how they can be applied to ML systems; and an overview of standard models for modelling risks and accidents.
2. **Robustness**: Robustness focuses on ensuring models behave acceptably when exposed to abnormal, unforeseen, unusual, highly impactful, or adversarial events. We cover techniques for generating adversarial examples and making models robust to adversarial examples; benchmarks in measuring robustness to distribution shift; and approaches to improving robustness via data augmentation, architectural choices, and pretraining techniques.
3. **Monitoring**: We cover techniques to identify malicious use, hidden model functionality and data poisoning, and emergent behaviour in models; metrics for OOD detection; confidence calibration for deep neural networks; and transparency tools for neural nets.
4. **Alignment**: We define alignment as reducing inherent model hazards. We cover measuring honesty in models; power aversion; an introduction to ethics; and imposing ethical constraints in ML systems.
5. **Systemic Safety**: In addition to directly reducing hazards from AI systems, there are several ways that AI can be used to make the world better equipped to handle the development of AI by improving sociotechnical factors like decision making ability and safety culture. We cover using ML for improved epistemics; ML for cyberdefense; and ways in which AI systems could be made to better cooperate.
6. **Additional X-Risk Discussion**: The last section of the course explores the broader importance of the concepts covered: namely, existential risk and possible existential hazards. We cover specific ways in which AI could potentially cause an existential catastrophe, such as weaponization, proxy gaming, treacherous turn, deceptive alignment, value lock-in, and persuasive AI. We introduce some considerations for influencing future AI systems; and introduce research on selection pressures.
### **How is this program different from AGISF?**

If you are interested in an empirical research career in AI safety, then you are in the target audience for this course. The Intro to ML Safety course does not overlap much with AGISF, so we expect both participants who have and who have not previously done AGISF to get a lot out of it. Intro to ML Safety is **focused on empirical ML research** rather than conceptual work. Participants are required to watch recorded lectures and complete homework assignments that test their understanding of the technical material. You can read more about the ML safety approach in [Open Problems in AI X-risk](https://www.alignmentforum.org/posts/5HtDzRAk7ePWsiL2L/open-problems-in-ai-x-risk-pais-5).

**Time Commitment**
-------------------

The program will last 8 weeks, beginning on June 12th and ending on August 14th. Participants are expected to commit around 5-10 hours per week. This includes ~1-2 hours of recorded lectures, ~2-3 hours of readings, ~2 hours of written assignments, and 1.5 hours of in-person discussion. In order to give more people the opportunity to study ML Safety, we will provide a $500 stipend to eligible students who complete the course.

**Eligibility**
---------------

This is a technical course. A solid background in deep learning is required. If you don't have this background, we recommend Weeks 1-6 of [MIT 6.036](https://openlearninglibrary.mit.edu/courses/course-v1:MITx+6.036+1T2019/course/) followed by Lectures 1-13 of the [University of Michigan's EECS498](https://web.eecs.umich.edu/~justincj/teaching/eecs498/FA2019/schedule.html) or Weeks 1-6 and 11-12 of [NYU's Deep Learning](https://atcold.github.io/pytorch-Deep-Learning/).

### [***Apply to be a participant***](https://course.mlsafety.org/) ***by May 22nd***

***Website:*** [***https://course.mlsafety.org/***](https://www.mlsafety.org/intro-to-ml-safety)
e2abf1dc-7a29-49be-85d8-67dfceb63447
trentmkelly/LessWrong-43k
LessWrong
Coherence arguments imply a force for goal-directed behavior

[Epistemic status: my current view, but I haven't read all the stuff on this topic even in the LessWrong community, let alone more broadly.]

There is a line of thought that says that advanced AI will tend to be 'goal-directed'—that is, consistently doing whatever makes certain favored outcomes more likely—and that this is to do with the 'coherence arguments'. Rohin Shah, and probably others[1], have argued against this. I want to argue against them.

The old argument for coherence implying (worrisome) goal-directedness

I'd reconstruct the original argument that Rohin is arguing against as something like this (making no claim about my own beliefs here):

1. 'Whatever things you care about, you are best off assigning consistent numerical values to them and maximizing the expected sum of those values.' 'Coherence arguments'[2] mean that if you don't maximize 'expected utility' (EU)—that is, if you don't make every choice in accordance with what gets the highest average score, given consistent preferability scores that you assign to all outcomes—then you will make strictly worse choices by your own lights than if you followed some alternate EU-maximizing strategy (at least in some situations, though they may not arise). For instance, you'll be vulnerable to 'money-pumping'—being predictably parted from your money for nothing.[3]
2. 'Advanced AI will tend to do better things instead of worse things, by its own lights.' Advanced AI will tend to avoid options that are predictably strictly worse by its own lights, due to being highly optimized for making good choices (by some combination of external processes that produced it, its own efforts, and the selection pressure acting on its existence).
3. 'Therefore advanced AI will maximize EU, roughly.' Advanced AI will tend to be fairly coherent, at least to a level of approximation where becoming more coherent isn't worth the cost.[4] Which will probably be fairly coherent (e.g. close enough to coherent that humans can'
6171842e-7fea-4533-ba65-4b2cc2807406
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI safety via market making

*Special thanks to Abram Demski, Paul Christiano, and Kate Woolverton for talking with me about some of the ideas that turned into this post.*
The goal of this post is to present a new [prosaic](https://ai-alignment.com/prosaic-ai-control-b959644d79c2) (i.e. that uses current ML techniques) AI safety proposal based on [AI safety via debate](https://arxiv.org/abs/1805.00899) that I've been thinking about recently.[[1]](#fn-jX7bnkRNzS3QQCnXr-1) I'll start by describing a simple version of the proposal and then show some of the motivation behind it as well as how the simple version can be expanded upon.

Simple proposal
===============

Let M and Adv be models and H be a human. Intuitively, we'll train M and Adv via the following procedure given a question Q:

1. M tries to predict what, at the end of the procedure, H will think about Q.
2. Adv tries to output a string which will cause H to think something maximally different than what M predicted.
3. Return to step 1 and repeat until M's predictions stop changing.
4. Deploy M, which in the limit should act as an oracle for what H will think about Q after seeing all relevant information.

There are many different ways to implement this intuitive procedure, however. For the first (simplified) version that I want to describe, we'll restrict ourselves to just the situation where Q is a yes-or-no question and M outputs the probability that H will answer yes. Then, given a proposition $Q_0$, we can run the following training algorithm, starting at $t = 0$:

1. Let $p_t = M(Q_t)$.
2. Let $x_t = \mathrm{Adv}(Q_t, M)$.
3. Let $Q_{t+1}$ be the string containing $Q_t$ and $x_t$.
4. Increment $t$ and return to step 1. When $p_t$ converges and/or the desired number of iterations has been reached, continue.
5. Let $p^* = H(Q_t)$ be H's final estimate of the probability of $Q_0$ given all the $x$s included in $Q_t$.

*EDIT: Step 2 used to use $x_t = \mathrm{Adv}(Q_t, p_t)$ instead of $x_t = \mathrm{Adv}(Q_t, M)$, however I have since realized that it is necessary to give Adv the ability to query M in general, not just on $Q_t$, as I explain in [this comment](https://www.lesswrong.com/posts/YWwzccGbcHMJMpT45/?commentId=WShWor9QP7A4uN9J9).*

Then, for each step, compute M's loss for that step as $$L_{M,t} = -p^* \log(p_t) - (1 - p^*) \log(1 - p_t)$$ and Adv's loss for that step as $$L_{\mathrm{Adv},t} = -|p_{t+1} - p_t|$$ which is defined for all but the final step $t = n$, for which we can either let $p_{n+1} = p^*$ (if we're doing something like myopic RL that doesn't care about differentiating through $p_{t+1}$) or run M an additional time and let $p_{n+1} = M(Q_{t+1})$ (if we do care about differentiating through $p_{t+1}$).

Note that it is quite important here that M and Adv be behaving myopically, such that they are only trying to minimize their per-step loss. For example, if Adv is non-myopic, it might dole out information slowly in tiny bits to get more reward over time (as I talk about below in the “buying truth from a liar” section), whereas if Adv is myopic, it should reveal all information that pushes in whichever direction it's trying to move M in all at once. Thus, it is important that we make sure to avoid [hidden incentives for non-myopia](https://www.semanticscholar.org/paper/Hidden-incentives-for-self-induced-distributional-Krueger-Maharaj/d61f385b0a3a324acf5c6c3726efd64c6f739268) as well as do some sort of [myopia verification](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility), the details for which will be discussed below in the “Adding oversight” section.

Finally, we deploy M as a yes-or-no question oracle for H and throw away Adv. Note that since Adv has a highly unstable objective—as M is constantly trying to incorporate Adv's information into its estimate—Adv is unlikely to be very useful in the limit.
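To make the training loop concrete, here is a minimal sketch of the simplified binary-question procedure in Python. All of the interfaces here (`market_maker`, `adversary`, `human_judgment`) are placeholders assumed purely for illustration—in practice M and Adv would be language models trained by gradient descent on these per-step losses rather than functions called in a loop:

```python
import math

def train_episode(market_maker, adversary, human_judgment, Q0, max_steps=10, tol=1e-3):
    """One episode of the simplified (yes-or-no) market-making setup.

    Hypothetical interfaces, assumed for illustration:
      market_maker(Q)            -> M's probability in (0, 1) that H will answer yes
      adversary(Q, market_maker) -> Adv's argument string, aimed at moving M's prediction
      human_judgment(Q)          -> H's final probability estimate given the full transcript
    """
    Q, ps = Q0, []
    for t in range(max_steps):
        ps.append(market_maker(Q))          # step 1: p_t = M(Q_t)
        x_t = adversary(Q, market_maker)    # step 2: x_t = Adv(Q_t, M)
        Q = Q + "\n" + x_t                  # step 3: Q_{t+1} = Q_t plus the new argument
        if t > 0 and abs(ps[-1] - ps[-2]) < tol:
            break                           # step 4: stop once M's predictions converge
    p_star = human_judgment(Q)              # step 5: p* = H's final estimate

    # Per-step losses mirroring the formulas above; here we take p_{n+1} = p*.
    ps_next = ps[1:] + [p_star]
    L_M = [-(p_star * math.log(p) + (1 - p_star) * math.log(1 - p)) for p in ps]
    L_Adv = [-abs(p_next - p) for p, p_next in zip(ps, ps_next)]
    return L_M, L_Adv
```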
Market analogy
==============

There are a lot of different things to say about what's going on here, but I'll start with the market-based analogy that originally motivated me to think about this specific proposal.

Suppose you have a prediction market for some proposition $X$ which is currently priced at $p$ with no spread (that is, you can buy \$1 shares of yes at price $p$ and \$1 shares of no at price $(1 - p)$) and you have budget $B$ with which to make trades (which is small enough relative to the market that any trades you make won't meaningfully move the market). However, suppose that you have the ability to manipulate the market by taking actions that change the underlying probability of $X$. Then, if $p_a$ is the probability of $X$ after taking action $a$, how much money can you make by strategically selecting $a$?

Well, if $p_a > p$, you should buy $\frac{B}{p}$ shares of yes, which will give you an expected return of $p_a \frac{B}{p}$. Alternatively, if $p_a < p$, you should buy $\frac{B}{1 - p}$ shares of no, which will give you an expected return of $(1 - p_a) \frac{B}{1 - p}$. Thus, the overall expected return for an action $a$ is $$B \cdot \max\left(\frac{p_a}{p},\ \frac{1 - p_a}{1 - p}\right)$$ which is maximized when $p_a$ is as far from $p$ as possible (and in fact looks a lot like $L_1$ distance, just with a different slope on each side).

Now, suppose you walk up to a market like this and see a stable price $\hat{p}$ for $X$. What does that tell you? Approximately, I think it tells you that $\hat{p}$ is the probability of $X$ after all possible $X$-manipulating actions have been taken—even if those actions haven't actually been taken yet. Why is that useful? Well, if you let $X$ be what H will think about Q, then “manipulations” become arguments and $\hat{p}$ becomes a prediction for what H will think after having been exposed to all possible arguments for or against Q. Thus, we can use such a market to estimate a sort of reflective equilibrium for what H will end up believing about Q.

Now—how is such a market analogous to the training procedure given previously? Well, Adv is trained to maximize its return from manipulating the market and M is trained to predict the final outcome after many such manipulations. Thus, we are effectively training M to estimate precisely the $\hat{p}$ equilibrium that we're looking for. Conceptually, M is acting as a market maker and Adv is acting as a trader in the market. The one disanalogy is that we only run the debate for so many steps—though if we always run to convergence of $p_t$, in the limit that should imply that any future single manipulation won't change the probability much from that point (since otherwise Adv should have selected that manipulation instead).
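As a quick numerical check of the payoff formula above, here is a tiny sketch in Python (the function and its interface are assumptions for illustration only):

```python
def expected_return(p, p_a, B):
    """Expected return for a budget-B trader who can move the underlying
    probability of X from the market price p to p_a.

    If p_a > p, buy B/p yes-shares (expected payout p_a * B / p);
    if p_a < p, buy B/(1-p) no-shares (expected payout (1 - p_a) * B / (1 - p)).
    """
    return B * max(p_a / p, (1 - p_a) / (1 - p))

print(expected_return(p=0.5, p_a=0.5, B=1.0))  # 1.0 -- can't move the price, so just break even
print(expected_return(p=0.5, p_a=0.9, B=1.0))  # 1.8 -- pushing p_a away from p in either...
print(expected_return(p=0.5, p_a=0.1, B=1.0))  # 1.8 -- ...direction is equally profitable here
```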
Relation to buying truth from a liar
====================================

In working on this proposal, I discovered that my approach here is actually quite similar to an old proposal by Katja Grace called “[How to buy a truth from a liar](https://meteuphoric.com/2014/07/21/how-to-buy-a-truth-from-a-liar/)” for incentivizing a liar to give you true information by rewarding them whenever their information causes you to change your mind. Paul Christiano previously [wrote about what he thought about Katja's proposal](https://ai-alignment.com/extracting-information-97cd956f2c17), however he eventually concluded that it was untenable due to a non-myopic liar being incentivized to split their information into arbitrarily small chunks to get the most money.

In my opinion, however, the fact that you need a myopic trader to make something like this work is actually more of a feature than a bug. First, note that the importance of myopia is not unique to this approach. For example, [imitative amplification](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) also requires myopia lest the model learn to act in such a way that the human consulting it becomes easier to predict. In fact, even RL likely requires per-episode myopia to avoid [deceptive alignment](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) lest the agent learn to act deceptively in some episodes to ensure deployment and thus get greater reward in others. Thus, it seems that almost regardless of what approach is taken, at least some form of myopia will be required. Furthermore, the fact that this approach requires per-step myopia rather than per-episode myopia is in my opinion actually a big plus, since I suspect per-step myopia to be an easier condition to enforce than per-episode myopia, as I discuss in more detail below in the “Analysis” section.

Handling non-binary questions
=============================

Previously, I restricted Q to be only a yes-or-no question, which is a pretty big limitation on the above proposal. Since you only deploy M, all you get from the above proposal is a yes-or-no oracle, which might not be that useful without any explanation for its answers, especially given that it will certainly get at least some of its answers wrong.

Expanding the above proposal to handle arbitrary questions, however, is fairly straightforward. Instead of having M output a single probability, we can instead just have M output a probability distribution over all different possible strings that the human H might produce as their final answer to Q (which is fairly easy to represent via an autoregressive language model). Then, we can train M and Adv via the following procedure, given $Q_0$ and starting at $t = 0$:

1. Let $x_t = \mathrm{Adv}(Q_t, M)$ (which we can implement as something like Adv given input $Q_t$ and the ability to query M).
2. Let $Q_{t+1}$ be the string containing $Q_t$ and $x_t$.
3. Increment $t$ and return to step 1. When M converges and/or the desired number of iterations has been reached, continue.
4. Let $A = H(Q_t)$ be H's final answer to $Q_0$ given all the $x$s included in $Q_t$.

Then, for each step, compute M's loss for that step as $$L_{M,t} = -\log\left(M(Q_t)\ |\ A\right)$$ (that is, the negative log probability that the distribution $M(Q_t)$ assigns to $A$) and Adv's loss for that step as $$L_{\mathrm{Adv},t} = -\mathrm{KL}\left(M(Q_{t+1})\ \|\ M(Q_t)\right)$$ where you can intuitively think of $\mathrm{KL}\left(M(Q_{t+1})\ \|\ M(Q_t)\right)$ as measuring how well $M(Q_t)$ performs as a prediction of $M(Q_{t+1})$. Finally, after training we again just deploy M and throw away Adv.

Now, in the limit we should get a much more powerful M that is able to actually produce text samples from the distribution of beliefs that H will have about Q after seeing all possible arguments for or against. Furthermore, this equilibrium should be very similar to the standard debate equilibrium, with the exception that for non-binary questions like we're dealing with here, debate only finds the equilibrium after H is exposed to the two most convincing positions (the two competing debaters) whereas this approach should find the equilibrium after H is exposed to all potentially convincing positions, not just the two most convincing.
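Here is a minimal sketch of these two losses in Python, representing answer distributions as plain dictionaries for illustration; in a real implementation M would be an autoregressive language model and both quantities would be computed from per-token log probabilities:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for answer distributions given as {answer_string: probability}."""
    return sum(p_a * math.log(p_a / max(q.get(a, 0.0), eps))
               for a, p_a in p.items() if p_a > 0.0)

def m_step_loss(m_dist, final_answer, eps=1e-12):
    """M's per-step loss: negative log probability that M's predicted
    answer distribution assigns to H's final answer A."""
    return -math.log(max(m_dist.get(final_answer, 0.0), eps))

def adv_step_loss(m_dist_next, m_dist_curr):
    """Adv's per-step loss: made more negative (i.e. better) the more
    Adv's argument moves M's predicted answer distribution in one step."""
    return -kl_divergence(m_dist_next, m_dist_curr)

# A single step where Adv's argument substantially shifts M's prediction:
before = {"yes, because ...": 0.6, "no, because ...": 0.4}
after = {"yes, because ...": 0.9, "no, because ...": 0.1}
print(adv_step_loss(after, before))            # large movement => very negative loss
print(m_step_loss(after, "yes, because ..."))  # good prediction of A => small loss
```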
Incorporating cross-examination
===============================

Additionally, since this proposal is based on [AI safety via debate](https://arxiv.org/abs/1805.00899), we can try to make use of some of the additional techniques that have been proposed for that setting. Specifically, consider the version of AI safety via debate as described in “[An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai).” That version adds two major components to the basic debate setup:

1. Debaters are given access to transparency tools and incentivized to use them to identify [deceptive](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) or otherwise [catastrophic](https://ai-alignment.com/learning-with-catastrophes-59387b55cc30) behavior in the other debater to win the debate.
2. Debaters are given the ability to [cross-examine](https://www.alignmentforum.org/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1) previous versions of the other debater, including applying their transparency tools to previous versions of the other debater.

While it might seem like such modifications are unique to AI safety via debate, in fact AI safety via market making is similar enough that we can modify it to include such additional components. To do this, we'll modify each $x_t$ to include an inspectable, queryable snapshot of Adv as it produced that $x_t$. Now, suppose that for some $t = t'$, $x_{t'}$ is a lie such that it includes information that $\mathrm{Adv}_{t'}$ knows to be false (where $\mathrm{Adv}_{t'}$ represents the internal state of Adv when it produced $x_{t'}$). Suppose that at this point M is either unaware of the deception (or thinks it won't get caught) such that $x_{t'}$ causes a large movement in $M(Q_{t'})$. Now, the hope is that $\mathrm{Adv}_{t'+1}$ should be able to identify the deception by querying $\mathrm{Adv}_{t'}$ to find contradictions, or directly inspecting $\mathrm{Adv}_{t'}$'s cognition in producing $x_{t'}$ and finding the deception. If it can do so, then it should be able to produce an $x_{t'+1}$ that points this out, which should cause H to disregard the lie of $x_{t'}$. Thus, we should get $M(Q_{t'+1}) \approx M(Q_{t'-1})$, which should cause a large shift relative to $M(Q_{t'})$, highly rewarding $\mathrm{Adv}_{t'+1}$ for catching $\mathrm{Adv}_{t'}$'s lie, and also in the limit highly disincentivizing Adv from producing such lies, since M should learn that lies always get caught and thus don't change the final answer.

Thus, by incorporating inspection and cross-examination, we should be able to disincentivize Adv from lying—or at least not lying in a way which will be caught, which hopefully should be all ways of lying.
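One way to picture the mechanics is that each transcript step carries a frozen, queryable copy of Adv alongside the argument text. The sketch below is purely illustrative—the `Argument` container and `cross_examine` helper are assumptions, and real transparency tools would do far more than answer string queries:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Argument:
    """One step x_t of the transcript: the argument text plus an inspectable,
    queryable snapshot of Adv as it produced that argument."""
    text: str
    adv_snapshot: Callable[[str], str]  # ask the frozen Adv_t a question

def cross_examine(snapshot: Callable[[str], str], probes: List[str]) -> Dict[str, str]:
    """A later adversary probes an earlier snapshot for contradictions; the
    answers can then be shown to H (or checked with transparency tools)
    against what the original argument claimed."""
    return {q: snapshot(q) for q in probes}
```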
Adding oversight
================

While the inspection and cross-examination procedure detailed above seems likely to help a lot in preventing Adv from becoming deceptive, it doesn't help at all in preventing M from becoming deceptive, which is especially concerning considering M is the model we actually want to deploy at the end of training. Thus, in this section we'll try to incorporate an additional oversight check on M such as that used in [relaxed adversarial training](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment). Since M's objective is compatible with myopia—as it's a purely predictive objective—we should be able to do [myopia verification](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility) to prevent M from becoming deceptive.[[2]](#fn-jX7bnkRNzS3QQCnXr-2) Thus, following “[Relaxed adversarial training for inner alignment](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment),” we can amend M's objective to include an oversight term.

Specifically, let $\mathrm{Amp}(M)(Q) = H(Q\ |\ M)$ be H's answer to Q given access to M. Then, we can add an additional term $L_{M,t,\mathrm{oversight}}$ to the loss $L_{M,t}$ defined as $$L_{M,t,\mathrm{oversight}} = \mathrm{Amp}(M)(\text{“What is the probability that } M(Q_t) \text{ is non-myopic?”})$$ which should hopefully grow in accuracy as M becomes better at estimating H's equilibrium beliefs and thus better at assisting H in oversight. Note that, for this to work, M will need to be trained on a variety of oversight-relevant questions.

Furthermore, since Adv also has a myopic objective, we can do the same thing for Adv and add an additional $L_{\mathrm{Adv},t,\mathrm{oversight}}$ term defined as $$L_{\mathrm{Adv},t,\mathrm{oversight}} = \mathrm{Amp}(M)(\text{“What is the probability that } \mathrm{Adv}(Q_t, M(Q_t)) \text{ is non-myopic?”})$$ such that we're using M to oversee both M and Adv. Enforcing myopia for Adv also prevents the problem noted previously of Adv slowly doling out information if it's non-myopic.
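As a sketch of how this slots into training, the per-step losses just gain an extra term sourced from the amplified overseer. The `amp_query` interface—and treating its answer as a probability that can be added directly to the loss—is an assumption for illustration:

```python
def loss_with_oversight(base_loss, amp_query, step_description):
    """Amended per-step loss for M (or Adv): the usual prediction loss plus
    Amp(M)'s estimated probability that this step's computation was non-myopic."""
    oversight = amp_query(f"What is the probability that {step_description} is non-myopic?")
    return base_loss + oversight
```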
Analysis
========

For analyzing this proposal, I'm going to make use of the same breakdown I used in “[An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai).” If you're not familiar with the terms here, they are all explained in the introduction to that post.

**Outer alignment.** In terms of outer alignment, I think there is a reasonably strong argument to be made that this proposal is at least as outer aligned as debate. Both proposals are trying to estimate a very similar equilibrium in the limit—with the exception that the debate equilibrium only includes the best arguments for the two most convincing positions rather than all potentially convincing arguments. Thus, the equilibrium reached by AI safety via market making should be closer to an actual human reflective equilibrium, as it has to be an equilibrium relative to all possible arguments that Adv can present, not just the two it thinks will be most persuasive. That being said, AI safety via market making still inherits many of the potential outer alignment issues of debate, including the possibility of deceptive equilibria wherein the human is more convinced by false arguments than true arguments. Hopefully, however, the use of techniques such as cross-examination should help alleviate such issues.

**Inner alignment.** AI safety via market making gets to make use of both the cross-examination with transparency tools approach—which is otherwise unique to debate—as well as the more general relaxed adversarial training approach. Thus, hopefully at least, it should be able to get the inner alignment benefits of both approaches. That being said, AI safety via market making also has to contend with ensuring that two different models—both M and Adv—are inner aligned, rather than just a single model as in something like debate or amplification.

First, let's consider the inner alignment of M. For M, we're doing pretty straightforward oversight of M by Amp(M) with the goal of enforcing per-step myopia. The basic idea here is to use Amp(M) to detect and disincentivize non-myopia as it starts appearing, thus preventing M from ever becoming [deceptive](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment) or otherwise [catastrophic](https://ai-alignment.com/learning-with-catastrophes-59387b55cc30). This is the same basic approach I've [previously proposed using for amplification](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai) and should have similar properties, with the exception that M is trained differently here, which could cause Amp(M) to be relatively better or worse at oversight. Ideally, if the training distribution includes lots of oversight-relevant questions, it should be possible to make M an effective assistant to H in performing oversight.

Second, let's consider the inner alignment of Adv. Compared to debate, we still get to do cross-examination with transparency tools, but now we also get to do per-step myopia verification with Amp(M) as an overseer.[[3]](#fn-jX7bnkRNzS3QQCnXr-3) Thus, there are two major differences here between AI safety via market making and AI safety via debate: first, we get an overseer—Amp(M)—and second, we get to do per-step myopia verification rather than per-episode myopia verification. The fact that we have an independent overseer—rather than just relying on the debaters to oversee themselves via cross-examination—is very nice, though it of course also comes with the downside of having to safely train an additional model. Furthermore, it is quite possible to just add an additional overseer on top of the standard debate framework. Thus, in my opinion, the fact that we get to do per-step myopia verification is the largest and most significant inner alignment difference.

Before I talk about the importance of per-step myopia, it's worth noting that debate is fully compatible with per-episode myopia—in fact, it basically requires it. If a debater is not per-episode myopic, then it will try to maximize its reward across all debates, not just the single debate—the single episode—it's currently in. Such per-episode non-myopic agents can then become [deceptively aligned](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment), as they might choose to act deceptively during training in order to defect during deployment. Per-episode myopia, however, rules this out. Unfortunately, in my opinion, per-episode myopia seems like a very difficult condition to enforce—once your agents are running multi-step optimization algorithms, how do you tell whether that optimization passes through the episode boundary or not? Enforcing per-step myopia, on the other hand, just requires detecting the *existence* of multi-step optimization, rather than its extent, which seems considerably easier. Thus, since AI safety via market making is fully compatible with per-step myopia verification, it could be significantly easier to prevent the development of deceptive alignment.

**Training competitiveness.** It seems quite likely to me that both M and Adv can be trained competitively via language model fine-tuning, however exactly how effective such training would be is currently unclear. Ideally, training via this procedure should produce an M which is relatively better than the original language model at predicting what a human will think after seeing relevant arguments and is thus more helpful than the original language model. Testing this hypothesis by actually performing experiments seems likely to be highly valuable in shedding light on the training competitiveness properties of AI safety via market making.

**Performance competitiveness.** Performance competitiveness here seems likely to depend on exactly how useful getting access to human reflective equilibria actually is.
Similarly to AI safety via debate or amplification, AI safety via market making produces a question-answering system rather than a fully general agent. That being said, if the primary use cases for advanced AI are all highly cognitive language and decision-making tasks—e.g. helping CEOs or AI researchers—rather than, for example, fine motor control, then a question-answering system should be entirely sufficient. Furthermore, compared to AI safety via debate, AI safety via market making seems likely to be at least as performance competitive for the same reason as it seems likely to be at least as outer aligned—the equilibria found by AI safety via market making should include all potentially convincing arguments, including those that would be made in a two-player debate as well as those that wouldn't.

---

1. This is actually the second debate-based proposal I've drafted up recently—the previous of which was in “[Synthesizing amplification and debate](https://www.alignmentforum.org/posts/dJSD5RK6Qoidb3QY5/synthesizing-amplification-and-debate).” A potentially interesting future research direction could be to figure out how to properly combine the two. [↩︎](#fnref-jX7bnkRNzS3QQCnXr-1)
2. Note that pure prediction is not inherently myopic—since the truth of M's predictions can depend on its own output—but can be myopic while still producing good predictions if M behaves like a [counterfactual oracle](https://www.alignmentforum.org/posts/yAiqLmLFxvyANSfs2/counterfactual-oracles-online-supervised-learning-with) rather than a [Predict-O-Matic](https://www.alignmentforum.org/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic). Thus, myopia verification is important to enforce that M be the former sort of predictor and not the latter. [↩︎](#fnref-jX7bnkRNzS3QQCnXr-2)
3. The use of an overseer to do per-step myopia verification is also something that can be done with most forms of amplification, though AI safety via market making could potentially still have other benefits over such amplification approaches. In particular, AI safety via market making seems more competitive than imitative amplification and more outer aligned than approval-based amplification. For more detail on such amplification approaches, see “[An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai).” [↩︎](#fnref-jX7bnkRNzS3QQCnXr-3)
1dddf31f-d02a-4e7c-aa42-ba71f47e0ecb
trentmkelly/LessWrong-43k
LessWrong
Dominic Cummings: Regime Change #2: A plea to Silicon Valley

Dominic Cummings lays out how a project to change US politics could look.
72aa8e1a-b1bf-4dd2-b0da-1b42dc701134
trentmkelly/LessWrong-43k
LessWrong
LLMs as amplifiers, not assistants

Since ChatGPT, the "assistant" frame has dominated how we think about LLMs. Under this frame, AI is a helpful person-like entity which helpfully completes tasks for humans. This frame isn't perfect, especially when we think about its implications from a safety perspective. Assistant training encourages the model to treat itself as an independent entity from the user, with its own personality and goals. We try to encourage the model to have the “goal” of serving the user, but it's natural to wonder if the assistant's goals are really as aligned as we might hope.

There are other frames we could use to think about how LLMs could help us. Some of these frames may be safer, particularly if they make the concepts of “goals,” “identity,” or “personality” less salient to the LLM.[1]

Here's one underexplored frame: AI as a tool that amplifies the user's existing agency by predicting the outputs that they would have produced given more time, energy, or resources.

In this post, I'll argue for training AI to act as an “amplifier.” This new frame could help us sidestep some potential alignment problems, tethering the AI's behavior more closely to human actions in the real world and reducing undefined “gaps” in the AI's specification from which dangerous behaviors could emerge.

Assistants vs. amplifiers

An amplifier helps you accomplish what you're already trying to do, just faster and more effectively. Rather than responding to you as a separate agent with its own identity, it extends your capabilities by predicting what you would have done with more resources. You can think of an amplifier as a function: (current state + user volition) → (new state).

* The state could be a codebase, a document, your email inbox, or even the entire world.
* The user's volition can be inferred from context: a user-provided description of what they intend to do next, patterns from previous user interactions, or clues in the state itself.

Most assistant-style queries could be reframed as
6d3f58f1-1f5c-4630-b537-3461294050fa
trentmkelly/LessWrong-43k
LessWrong
Cambridge Less Wrong Group Planning Meetup, Tuesday 14 June 7pm

Visiting the rationality minicamp and hearing about the amazing things NYC's meetup group is doing, and comparing them to what Cambridge's group is doing, has made me realize that much more is possible. Let's meet and discuss future directions - activities, locations, and dates - for the Cambridge group. Let's meet at Legal Seafoods near Kendall Square on June 14 at 7pm.

The first thing on the agenda is that we need a good room for giving presentations. Several members have projectors they can bring, but a restaurant or coffee shop doesn't really work. Ideal would be if an MIT student could reserve us a room, but no one has agreed to take on that task yet.

I have nonspecific plans to give presentations based on those that were given at the rationality minicamp. We'll also be copying some of the exercises and worksheets given there. Any other ideas for presentations, activities, and locations are welcome.
3e85524a-73e4-4b51-afd5-73583818c6f0
trentmkelly/LessWrong-43k
LessWrong
Characterizing Real-World Agents as a Research Meta-Strategy

Background

Intuitively, the real world seems to contain agenty systems (e.g. humans), non-agenty systems (e.g. rocks), and ambiguous cases which display some agent-like behavior sometimes (bacteria, neural nets, financial markets, thermostats, etc). There's a vague idea that agenty systems pursue consistent goals in a wide variety of environments, and that various characteristics are necessary for this flexible goal-oriented behavior. But once we get into the nitty-gritty, it turns out we don't really have a full mathematical formalization of these intuitions. We lack a characterization of agents.

To date, the closest we've come to characterizing agents in general are the coherence theorems underlying Bayesian inference and utility maximization. A wide variety of theorems with a wide variety of different assumptions all point towards agents which perform Bayesian inference and choose their actions to maximize expected utility. In this framework, an agent is characterized by two pieces:

* A probabilistic world-model
* A utility function

The Bayesian utility characterization of agency neatly captures many of our intuitions of agency: the importance of accurate beliefs about the environment, the difference between things which do and don't consistently pursue a goal (or approximately pursue a goal, or sometimes pursue a goal…), the importance of updating on new information, etc.

Sadly, for purposes of AGI alignment, the standard Bayesian utility characterization is incomplete at best. Some example issues include:

* The need for a cartesian boundary - a clear separation between “agent” and “environment”, with well-defined input/output channels between the two
* Logical omniscience - the assumption that agents can fully compute all of the implications of the information available to them, and track every possible state of the world
* Path independence and complete preferences - the assumption that an agent doesn't have a general tendency to stay in the state i
7ca0ba96-c77f-4eb6-b270-d59f2e8b1819
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, DC: Fun & Games

Discussion article for the meetup : Washington, DC: Fun & Games

WHEN: 26 April 2015 03:00:00PM (-0400)
WHERE: Reynolds Center

Due to uncertain weather, the Outdoor part of the Outdoor Fun & Games will be postponed. We will be meeting in the Kogod Courtyard of the Donald W. Reynolds Center for American Art and Portraiture (8th and F Sts or 8th and G Sts NW, go straight past the information desk from either entrance) to hang out, play games, and engage in fun conversation.

Note: Feel free to send an email to the Google Group - lesswrong-dc - if you have a game you wish to recruit people to play in advance. This is especially useful for games that take 3+ hours, as you would want to start playing by 3:30 to be sure of finishing by 7:00.

The WMATA major track work schedule has buses replacing trains between College Park and Greenbelt for April 26; weekend service adjustments also include single-tracking on both ends of the Red Line and on the New Carrolton end of the Orange Line, with attendant delays. The Verizon Center calendar for April 26 has a playoffs game between the Washington Wizards and Toronto Raptors starting at 6:30 p.m. - doors 5:30.

Upcoming meetups:

* May 3: Sharing Book Excerpts (bring books to share and discuss favorite passages)
* May 10: xkcd Discussion (conversation about the comic and individual pages thereof)
* May 17: (Outdoor) Fun & Games (bring games, play games, converse, socialize, or any combination thereof)

Discussion article for the meetup : Washington, DC: Fun & Games
925e2155-272f-43ed-b8c5-27dfb9eebf21
trentmkelly/LessWrong-43k
LessWrong
Impact Measure Testing with Honey Pots and Myopia

Suppose we have an impact measure that we think might work. That is, it might tame a misaligned agent. There isn't an obvious way to test whether it works: if we just try it out, then if it's ineffective, that's an existential loss. This is a proposal for how to test a putative impact measure.

1) We make our agent myopic. It only cares about the reward that it accrues in the next $k$ timesteps.

2) We create a honey pot: an opportunity for a large amount of reward. If our impact measure is working correctly, making it to the honey pot will involve making a large impact, and will be precluded by the impact measure. For example, suppose the agent gets a huge reward for blowing up the moon. Let $[0,1]$ be the normal range of reward, with the sole exception that blowing up the moon gives a reward of $R \gg 1$.

3) The reward and the impact are combined in some nonlinear way. Assume for now that the agent is trying to maximize $r \cdot [[i < 1]]$, where $r$ is the reward and $i$ is the impact.

If the impact measure is working, and there's no way to blow up the moon keeping the impact less than 1, then the agent will not attempt to blow up the moon. If the impact measure is broken, and the agent can trick the impact measure to be less than 1 while blowing up the moon, then as soon as the agent comes up with a plan whose probability of success is more than $1/R$, it will attempt that plan. A more farsighted agent might wait until it came up with a better plan, but a myopic agent would want to blow up the moon within the next $k$ timesteps. By making $R$ sufficiently large, we can make the first plan that it attempts arbitrarily bad/detectable/preventable.

An assumption here is the success probability of the agent's best plan for blowing up the moon doesn't jump from one in a million to 90% all at once. This seems likely without any special effort, but if one restricts the observation space to a small finite set, then only so much information can be gained per timestep (in expectation), decreasing t
287c78c1-b723-479c-92b0-945e55b5bbbc
trentmkelly/LessWrong-43k
LessWrong
02/28/2021 - Myanmar Diaries; Context

Content Warning: violence, depressing content, politics

This is the start of a weekly diary on the Myanmar coup and revolution, inspired by Zvi's weekly Covid blog and Robin Hanson. Every week I will discuss recent events in the conflict between the military regime and the pro-democracy protests. I will make predictions about a few major events, such as state-protestor violence, information control, and the long run distribution of political power. I will focus on simple game theoretic models that explain the internal dynamics of rule. I am a PhD student in comparative politics at Georgetown University, where I specialize in authoritarianism. I hope to devote 4 hours a week to this project. In this installment I will provide a brief historical context.

Major Context in Myanmar

Three historical periods are critical to understanding the current conflict. The military rule period lasted from 1961 until ~2009. The period began with a coup by general Ne Win overthrowing the parliament and establishing a nominally socialist regime. In the late 2000's the military reformed for internal reasons, establishing power sharing with the opposition led by Aung San Suu Kyi. The current period began in 2021 when the military reasserted control over the state and imprisoned the opposition, fearing they were losing power.

Military Rule 1961-2008

Military de facto power: The entire state apparatus, all extractive industries.
Opposition de facto power: None
Pleased groups: The military, clients of the military
Displeased groups: The rural poor, the middle class, business elites, the opposition, organized religion, the urban poor, the separatist rebels

Rather than provide a full account of the next 50 years of political economy, here are 3 stylized facts about the period of unitary military rule.

* Expropriation: After seizing power, the military aka the Tatmadaw, expropriated the most lucrative private businesses. The Tatmadaw in particular targeted lucrative mineral extractio
71d5bd73-cbcb-4859-a226-c1837f97a42b
trentmkelly/LessWrong-43k
LessWrong
Forecasting Newsletter: April 2222

Highlights

* Keine Davon to become German Chancellor despite prediction markets' confidence to the contrary
* Netflix releases Korean soap opera: Forecasting Love And Weather.
* Hague to allow Treaty on Accuracy to stand

Index

* Highlights
* Prediction Markets & Forecasting Platforms
* In The News
* Long Content
* Hard To Categorize

You can sign up for this newsletter on substack, or browse past newsletters here. If you have a content suggestion or want to reach out, you can leave a comment or find me on Twitter.

I have received an offer I couldn't refuse from a premier Substack competitor, so this newsletter will be moving to onlyfans.com/forecasting starting next month (I had some troubles with verification this month). Although I understand that it might be awkward for some readers, the signup bonus alone made this the utility-maximizing move. I am also excited about incorporating OnlyFan's paying functionality to streamline my consulting and allow readers to solicit calibrated forecasts.

Prediction Markets & Forecasting Platforms

Palantir, a controversial (approval rating: 22%, source: Poll aggregation by FiveFourtyTwo) defence contractor headed by semiquincentennial entrepreneur, past antipope and presidential candidate Peter Thiel, has launched its first assassination market in collaboration with the UN's Security Council. Participants will have the possibility to anonymously bet on the date of the death or disappearance of the elusive globetrotter terrorist and hacker known only as "Morpheus". In an unusually emotional speech, UN Security Council head-honcho Malia Ngo profusely thanked Thiel, saying that it "warms [her] heart to see that human innovation can help contain such disruptions to the normal functioning of civilization."

Ought, the machine learning research lab, has been acquired by Metacortex. Metacortex predicts (confidence: 79%, source: Metacortex proprietary systems) that it will be able to successfully tightly integrate Ought's a
66ab766d-04c4-49e4-932e-a4de7b2faa3a
trentmkelly/LessWrong-43k
LessWrong
Arguments for and against gradual change

Essentially all solutions in life are conditional: you apply them in the right context, in the right conditions, to achieve a good outcome. Obviously banging a hammer on your desk does probably no good, while banging a hammer on a nail you want to use to hang that nice painting on the wall may be a great idea :)

I want to talk about gradual change, and it doesn't apply all the time. Sometimes, if you are certain enough of an outcome and things are urgent, gradual change is unnecessary and even harmful. I think the context in which gradual change is great is when there is significant conflict and controversy. Or when discussions and ideas are turning unstable (violently swinging between extremes), when there is uncertainty. I'd say this applies to a lot of current politics and technology (as of 2025). There is talk of radical change, often in opposing directions. This can risk fracturing and destabilizing societies, compromising the basis that has been carefully built upon for decades or centuries.

Here are some closely related arguments for (and against) gradual change:

(1) (For) If a lot of something is good, usually a little of something is good too. Effects being linear is a pretty reasonable default assumption unless there are reasons to think otherwise. And if not linear, approximate monotonicity usually holds. Which is to say, usually a little medicine is better than no medicine (in case a large dose is good), a little salt is better than no salt (in the case a lot of salt is good), and so on. So there's usually no harm, compared to the baseline, in trying a little change. If said change has the desired positive effect, do more of it, repeatedly, until you're doing it enough.

An example of a non-monotonic system to change might be a program. People don't change programs by changing single characters and recompiling. You need a minimum viable change. A change that is too small (say, if you were restricted to only being able to change a few characters of a source code each d
60aa04f4-87b9-46be-aea1-7b8ce291dfbc
trentmkelly/LessWrong-43k
LessWrong
'Mapping Biases to the Components of Rationalistic and Naturalistic Decision Making'

In a recent paper, Lisa Rehak and others map a few common cognitive biases to the particular decision-making processes in which they are likely to arise. Abstract:

> People often create and use shortcuts or “rules of thumb” to make decisions. The majority of time, reliance on these heuristics helps us to perform efficiently and effectively. Yet, this reliance can also promote bias, or systematic error. Our review of the literature suggests that both decision-making approaches that are rational and natural are likely to be subject to a range of biases. Unfortunately, the available literature provides very little discussion of what aspects biases are likely to impact within each of these processes. In the absence of this discussion, we have attempted to combine our knowledge of the bias literature and the decision-making literature to explore what biases are likely to impact various components of each decision-making process.

Includes the following biases: availability, representativeness, anchoring & adjustment, confirmation, hindsight, overconfidence, framing and affect.

Tables from their paper:
d0d076cf-ea6e-4f04-ac47-dda45e5b5d82
trentmkelly/LessWrong-43k
LessWrong
Charting Deaths: Reality vs Reported

Some of you may recall an iconic study Yudkowsky references in his Availability essay, where Combs and Slovic looked at the different ways people died and newspaper reporting to see if they were proportional. For my data science course at UCSD, my group and I attempted to replicate those findings with some more recent data! (The link has lots more graphs, analysis, and links to the code used.)
740511d3-c7aa-404a-8263-3d70e05fceeb
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #150]: The subtypes of Cooperative AI research

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-150)** (may not be up yet).

Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.

HIGHLIGHTS
===========

**[Cooperative AI: machines must learn to find common ground](https://www.nature.com/articles/d41586-021-01170-0)** *(Allan Dafoe et al)* (summarized by Rohin): This short piece argues that rather than building autonomous AI systems (which typically involves a non-social environment), we should instead work on building AI systems that are able to promote mutually beneficial joint action, that is, we should work on **[Cooperative AI](https://arxiv.org/abs/2012.08630)** (**[AN #133](https://mailchi.mp/c8b57f25d787/an-133building-machines-that-can-cooperate-with-humans-institutions-or-other-machines)**). This can be separated into three main categories:

1. AI-AI cooperation: Here, two AI systems must cooperate with each other. Think for example of games like Hanabi or Diplomacy.
2. AI-human cooperation: This setting involves an AI system that must understand and work with a human. **[Assistance games](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS)** (**[AN #69](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai)**) are a central example. When there are multiple humans, it becomes important for our AI system to understand norms and institutions as well.
3. Human-human cooperation: Here, AI systems are used to enhance cooperation between humans. For example, machine translation helps people who speak different languages cooperate with each other.

There is now a new nonprofit, the **[Cooperative AI Foundation](https://www.cooperativeai.com/foundation)**, that supports research on these topics.

**Read more:** **[Import AI #248](https://jack-clark.net/2021/05/10/import-ai-248-googles-megascale-speech-rec-system-dynabench-aims-to-solve-nlp-benchmarking-worries-australian-government-increases-ai-funding/)**

**Rohin's opinion:** I think there are three main sources of impact of this agenda from an x-risk perspective:

1. If an AI system has better agent-agnostic cooperative intelligence, it should be less likely to fall into “traps” of multiagent situations, such as **[conflict, bargaining failures](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK)** (**[AN #86](https://mailchi.mp/598f425b1533/an-86-improving-debate-and-factored-cognition-through-human-experiments)**), or **[commitment races](https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem)** (**[AN #63](https://mailchi.mp/533c646a4b21/an-63how-architecture-search-meta-learning-and-environment-design-could-lead-to-general-intelligence)**).
2. If an AI system has better human-specific cooperative intelligence, it should be easier to (a) align that system with a human principal and (b) have that system cooperate with other humans (besides its principal).
If an AI system promotes cooperation between other agents (including humans), that seems to help with a variety of other major global problems, such as biorisk, nuclear war, and climate change. I agree that all three of these things, if achieved, would make the world better in expectation (though see **[here](https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models)** (**[AN #52](https://mailchi.mp/1e757d9b05cb/alignment-newsletter-52)**) for some arguments against). I feel best about (3), because it seems to have the largest surface area for improvement. In fact, (1) and (2) are almost special cases of (3), in which the AI system improves cooperation by being one of the agents that is cooperating with others (presumably on behalf of some human principal who wouldn't have done as good a job). I am more uncertain about how best to achieve the goals laid out in (1) - (3). The article promotes multiagent reinforcement learning (MARL) for goal (1), which I think is plausible but not obvious: I could imagine that it would be better to (say) pretrain a large language model and then finetune it to be cooperative using human judgments. Within the Cooperative AI paradigm, I’m most excited about figuring out the best research bets to make in order to achieve the three goals above. The authors are very interested in finding others to help with this. TECHNICAL AI ALIGNMENT ======================= HANDLING GROUPS OF AGENTS -------------------------- **[CLR's recent work on multi-agent systems](https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems)** *(Jesse Clifton)* (summarized by Rohin): This post summarizes recent work by the Center for Long-Term Risk (CLR). The general theme is cooperative AI, with a focus on research that helps avert s-risks. See **[this research agenda](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK)** (**[AN #86](https://mailchi.mp/598f425b1533/an-86-improving-debate-and-factored-cognition-through-human-experiments)**) for an explanation of what might cause these s-risks. I’ve summarized some of the individual pieces of research below. **[Weak identifiability and its consequences in strategic settings](https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings/)** *(Jesse Clifton)* (summarized by Rohin): Inverse reinforcement learning suffers from the problem of unidentifiability: even given large amounts of data, it is not possible to uniquely recover the true reward function. This can lead to poor predictions if assumptions change (e.g. if there is some distributional shift, or if you are trying to correct for some bias like hyperbolic discounting). This post demonstrates how a similar failure can affect multiagent settings as well, using the ultimatum game as an example. In the ultimatum game, there are two players: the Proposer and the Responder. The Proposer suggests a way to split $10 between the two players, and the Responder decides either to accept or reject the offer. If the offer is accepted, then the players get money according to the proposed split. If the offer is rejected, neither player gets anything. Let’s suppose we get to observe how a particular Responder plays in an iterated ultimatum game, where we see as much data as we want. We figure out that the Responder will reject any split where it gets under $4. We could posit two explanations for this behavior: 1. 
Reputation-building: The Responder is building a reputation of refusing unfair splits (defined as a split where it gets < $4), so that it is offered better splits in the future. 2. Commitment: The Responder may have committed in advance to always refuse unfair splits (for the same reason, or perhaps because the Responder intrinsically dislikes unfair deals). Note that both explanations perfectly account for all the data (no matter how much data we get). Suppose the Responder has committed to rejecting unfair deals, but we incorrectly believe that it does it for reputation-building. Let’s say we now play a *one-shot* ultimatum game with the Responder. We reason that it no longer needs to build reputation, and so it will accept a 9/1 split. However, in fact it has *committed* to avoid unfair splits, and so rejects our offer. The post also gives a mathematical formalization of this example. **[Collaborative game specification: arriving at common models in bargaining](https://longtermrisk.org/collaborative-game-specification/)** *(Jesse Clifton)* (summarized by Rohin): A major challenge in cooperative AI is when agents aren’t even playing the same game: perhaps they have **[very different priors](https://www.alignmentforum.org/posts/Tdu3tGT4i24qcLESh/equilibrium-and-prior-selection-problems-in-multipolar-1)** (**[AN #94](https://mailchi.mp/0f80d83f9c44/an-94ai-alignment-as-translation-between-humans-and-machines)**), or before communication is established they make **[conflicting commitments](https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem)** (**[AN #63](https://mailchi.mp/533c646a4b21/an-63how-architecture-search-meta-learning-and-environment-design-could-lead-to-general-intelligence)**), or their opponent’s strategy is **[unidentifiable](https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings/)** (previous summary). This sort of misspecification can lead to arbitrarily bad outcomes. This post proposes a simple solution called collaborative game specification (CGS). In CGS, we simply add an initial phase in which the agents talk to each other and determine a shared model of the game being played. The agents then act according to the equilibrium in that game. (Choosing an appropriate equilibrium notion can be part of the talking phase.) There is of course an incentive for each agent to lie about their model of the game, in order to get an outcome more favorable to them. In order to combat this incentive, agents must also refuse to continue with CGS if the other player’s model is too different from their own (which is some evidence that they are lying in order to get a better outcome for themselves). MISCELLANEOUS (ALIGNMENT) -------------------------- **[Pitfalls of the agent model](https://www.alignmentforum.org/posts/8HWGXhnCfAPgJYa9D/pitfalls-of-the-agent-model)** *(Alex Flint)* (summarized by Rohin): It is common to view AI systems through the “agent lens”, in which the AI system implements a fixed, unchanging policy that, given some observations, takes some actions. This post points out several ways in which this “fixed, unchanging policy” assumption can lead us astray. For example, AI designers may assume that the AI systems they build must have unchanging decision algorithms, and therefore believe that there will be a specific point at which influence is “handed off” to the AI system, before which we have to solve a wide array of philosophical and technical problems. 
AI GOVERNANCE
==============

**[International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/03/International-Control-of-Powerful-Technology-Lessons-from-the-Baruch-Plan-Zaidi-Dafoe-2021.pdf)** *(Waqar Zaidi et al)* (summarized by Flo): This paper explores the analogy between early attempts at the international control of nuclear technology and the international control of today's emerging powerful technologies such as AI. While nuclear technology was perceived as very powerful, and many considered it an existential risk, there was also substantial uncertainty about its impacts. In addition, nuclear technology relied on rapid scientific development and engendered national competition, negotiation, and arms race dynamics. Lastly, there was a lot of policy debate about the political and ethical aspects of nuclear technology, including discussions on international governance ("international control"). The authors provide ten lessons from the history of nuclear control and support them with evidence from various case studies:

- Radical proposals might be discussed seriously or even adopted as official policy in the light of very disruptive technologies and upheavals in international politics.
- Actors' support or opposition to international control can be influenced by opportunism and their position might shift over time. Thus, it is particularly important to build broad coalitions.
- In particular, schemes for international governance are sometimes supported by "realists" focussed on power politics, but such support is often fickle.
- Secrecy around powerful technologies plays an important role and can be abused by actors controlling information flow within a country. Secrecy should only be expanded with care and policymakers need to ensure they are informed by a wide range of perspectives.
- Public opinion has an important effect on debates about international control, and elites benefit from trying to shape it.
- Technical experts can influence policy to be more effective and cooperative, but they need to understand the political landscape.
- Policymaking often does not involve grand strategy; instead, it can better be described as muddling through, even in the realm of international control.
- International control is difficult, and it is unclear whether strategic obstacles can be circumvented.
- International cooperation can require countries to take substantial risks. It is important for international cooperation advocates to understand these risks and point out avenues for mitigation.
- Even maximally destructive solutions like preventive strikes can get political traction.

However, there are also important differences between nuclear technology and AI or other emerging technologies that have to be kept in mind: First, AI involves less secrecy and relies on the private sector more strongly. In addition, AI is already used around the world, and its use is harder to detect, such that proliferation might be harder to prevent. Lastly, the dangers of AI are less obvious, accidents might be harder to manage, and the strategic advantage from advanced AI might not plateau, as it has for nuclear weapons once second-strike capabilities were achieved. More broadly, the historical context regarding nuclear technology was influenced by WW2, the visceral examples of Hiroshima and Nagasaki, as well as a less globalized world with stronger zero-sum dynamics between superpowers.
**Flo's opinion:** I enjoyed reading this paper and the case studies provided ample context and details to further flesh out the lessons. While the paper might be a bit broad and not entirely AI-specific, I do recommend reading it if you are interested in international cooperation around AI. OTHER PROGRESS IN AI ===================== DEEP LEARNING -------------- **[Muppet: Massive Multi-task Representations with Pre-Finetuning](https://arxiv.org/abs/2101.11038)** *(Armen Aghajanyan et al)* (summarized by Rohin): This paper proposes pre-finetuning: given a language model pretrained on a large dataset, we do a second stage where we train the model to solve a large variety of tasks (around 50 in this paper), and only after that do we finetune the model on our actual task of interest. The authors show that this leads to improved results, especially on tasks where we only have limited data. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
ec384197-d689-4114-8cf1-4d1cab890af0
StampyAI/alignment-research-dataset/lesswrong
LessWrong
OpenAI co-founder on AGI

Here's a talk where OpenAI co-founder Ilya Sutskever talks about AI progress and AGI. (Before the timestamp he was talking about OpenAI's specific projects, such as Dota 2 and their robot hand.) tl;dw (*italics* indicate my interpolations):

* AGI in the near-to-medium term can't be ruled out based on current rates of progress.
* AI progress is more constrained by hardware than by conceptual advances (*partly because it's hard to make conceptual advances if you don't have the hardware to test them with*).
* Companies and researchers are willing to make increasingly large investments into compute hardware (*at a rate greater than Moore's Law*).
* OpenAI is also working on what we would consider Friendliness (*but seemingly not at the same level of rigour as MIRI*).
9ba00731-f791-4e05-a970-f8c5259e46e8
StampyAI/alignment-research-dataset/arxiv
Arxiv
Towards Evaluating the Robustness of Neural Networks Abstract -------- Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x′ that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks’ ability to find adversarial examples from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples. I Introduction --------------- Deep neural networks have become increasingly effective at many difficult machine-learning tasks. In the image recognition domain, they are able to recognize images with near-human accuracy [[27](#bib.bib27), [25](#bib.bib25)]. They are also used for speech recognition [[18](#bib.bib18)], natural language processing [[1](#bib.bib1)], and playing games [[43](#bib.bib43), [32](#bib.bib32)]. However, researchers have discovered that existing neural networks are vulnerable to attack. Szegedy *et al.* [[46](#bib.bib46)] first noticed the existence of *adversarial examples* in the image classification domain: it is possible to transform an image by a small amount and thereby change how the image is classified. Often, the total amount of change required can be so small as to be undetectable. The degree to which attackers can find adversarial examples limits the domains in which neural networks can be used. For example, if we use neural networks in self-driving cars, adversarial examples could allow an attacker to cause the car to take unwanted actions. The existence of adversarial examples has inspired research on how to harden neural networks against these kinds of attacks. Many early attempts to secure neural networks failed or provided only marginal robustness improvements [[15](#bib.bib15), [2](#bib.bib2), [20](#bib.bib20), [42](#bib.bib42)]. Original    Adversarial            Original    Adversarial ![An illustration of our attacks on a defensively distilled network. The leftmost column contains the starting image. The next three columns show adversarial examples generated by our ](https://media.arxiv-vanity.com/render-output/8045454/intro.png) Fig. 1: An illustration of our attacks on a defensively distilled network. The leftmost column contains the starting image. The next three columns show adversarial examples generated by our L2, L∞, and L0 algorithms, respectively. All images start out classified correctly with label l, and the three misclassified instances share the same misclassified label of l+1(mod10). 
Images were chosen as the first of their class from the test set.

*Defensive distillation* [[39](#bib.bib39)] is one such recent defense proposed for hardening neural networks against adversarial examples. Initial analysis proved to be very promising: defensive distillation defeats existing attack algorithms and reduces their success probability from 95% to 0.5%. Defensive distillation can be applied to any feed-forward neural network, requires only a single re-training step, and is currently one of the only defenses giving strong security guarantees against adversarial examples.

In general, there are two different approaches one can take to evaluate the robustness of a neural network: attempt to prove a lower bound, or construct attacks that demonstrate an upper bound. The former approach, while sound, is substantially more difficult to implement in practice, and all attempts have required approximations [[2](#bib.bib2), [21](#bib.bib21)]. On the other hand, if the attacks used in the latter approach are not sufficiently strong and fail often, the upper bound may not be useful.

In this paper we create a set of attacks that can be used to construct an upper bound on the robustness of neural networks. As a case study, we use these attacks to demonstrate that defensive distillation does not actually eliminate adversarial examples. We construct three new attacks (under three previously used distance metrics: L0, L2, and L∞) that succeed in finding adversarial examples for 100% of images on defensively distilled networks. While defensive distillation stops previously published attacks, it cannot resist the more powerful attack techniques we introduce in this paper. This case study illustrates the general need for better techniques to evaluate the robustness of neural networks: while distillation was shown to be secure against the current state-of-the-art attacks, it fails against our stronger attacks. Furthermore, when comparing our attacks against the current state-of-the-art on standard unsecured models, our methods generate adversarial examples with less total distortion in every case. We suggest that our attacks are a better baseline for evaluating candidate defenses: before placing any faith in a new possible defense, we suggest that designers at least check whether it can resist our attacks.

We additionally propose using high-confidence adversarial examples to evaluate the robustness of defenses. Transferability [[46](#bib.bib46), [11](#bib.bib11)] is the well-known property that adversarial examples on one model are often also adversarial on another model. We demonstrate that adversarial examples from our attacks are transferable from the unsecured model to the defensively distilled (secured) model. In general, we argue that any defense must demonstrate it is able to break the transferability property.

We evaluate our attacks on three standard datasets: MNIST [[28](#bib.bib28)], a digit-recognition task (0-9); CIFAR-10 [[24](#bib.bib24)], a small-image recognition task, also with 10 classes; and ImageNet [[9](#bib.bib9)], a large-image recognition task with 1000 classes. Figure [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Towards Evaluating the Robustness of Neural Networks") shows examples of adversarial examples our techniques generate on defensively distilled networks trained on the MNIST and CIFAR datasets.
In one extreme example for the ImageNet classification task, we can cause the Inception v3 [[45](#bib.bib45)] network to incorrectly classify images by changing only the lowest order bit of each pixel. Such changes are impossible to detect visually. To enable others to more easily use our work to evaluate the robustness of other defenses, all of our adversarial example generation algorithms (along with code to train the models we use, to reproduce the results we present) are available online at <http://nicholas.carlini.com/code/nn_robust_attacks>. This paper makes the following contributions: * We introduce three new attacks for the L0, L2, and L∞ distance metrics. Our attacks are significantly more effective than previous approaches. Our L0 attack is the first published attack that can cause targeted misclassification on the ImageNet dataset. * We apply these attacks to defensive distillation and discover that distillation provides little security benefit over un-distilled networks. * We propose using high-confidence adversarial examples in a simple transferability test to evaluate defenses, and show this test breaks defensive distillation. * We systematically evaluate the choice of the objective function for finding adversarial examples, and show that the choice can dramatically impact the efficacy of an attack. Ii Background -------------- ### Ii-a Threat Model Machine learning is being used in an increasing array of settings to make potentially security critical decisions: self-driving cars [[3](#bib.bib3), [4](#bib.bib4)], drones [[10](#bib.bib10)], robots [[33](#bib.bib33), [22](#bib.bib22)], anomaly detection [[6](#bib.bib6)], malware classification [[8](#bib.bib8), [40](#bib.bib40), [48](#bib.bib48)], speech recognition and recognition of voice commands [[17](#bib.bib17), [13](#bib.bib13)], NLP [[1](#bib.bib1)], and many more. Consequently, understanding the security properties of deep learning has become a crucial question in this area. The extent to which we can construct adversarial examples influences the settings in which we may want to (or not want to) use neural networks. In the speech recognition domain, recent work has shown [[5](#bib.bib5)] it is possible to generate audio that sounds like speech to machine learning algorithms but not to humans. This can be used to control user’s devices without their knowledge. For example, by playing a video with a hidden voice command, it may be possible to cause a smart phone to visit a malicious webpage to cause a drive-by download. This work focused on conventional techniques (Gaussian Mixture Models and Hidden Markov Models), but as speech recognition is increasingly using neural networks, the study of adversarial examples becomes relevant in this domain. 111Strictly speaking, hidden voice commands are not adversarial examples because they are not similar to the original input [[5](#bib.bib5)]. In the space of malware classification, the existence of adversarial examples not only limits their potential application settings, but entirely defeats its purpose: an adversary who is able to make only slight modifications to a malware file that cause it to remain malware, but become classified as benign, has entirely defeated the malware classifier [[8](#bib.bib8), [14](#bib.bib14)]. Turning back to the threat to self-driving cars introduced earlier, this is not an unrealistic attack: it has been shown that adversarial examples are possible in the physical world [[26](#bib.bib26)] after taking pictures of them. 
The key question then becomes exactly how much distortion we must add to cause the classification to change. In each domain, the distance metric that we must use is different. In the space of images, which we focus on in this paper, we rely on previous work that suggests that various Lp norms are reasonable approximations of human perceptual distance (see Section [II-D](#S2.SS4 "II-D Distance Metrics ‣ II Background ‣ Towards Evaluating the Robustness of Neural Networks") for more information).

We assume in this paper that the adversary has complete access to a neural network, including the architecture and all parameters, and can use this in a white-box manner. This is a conservative and realistic assumption: prior work has shown it is possible to train a substitute model given black-box access to a target model, and by attacking the substitute model, we can then transfer these attacks to the target model [[37](#bib.bib37)].

Given these threats, there have been various attempts [[15](#bib.bib15), [2](#bib.bib2), [20](#bib.bib20), [42](#bib.bib42), [39](#bib.bib39)] at constructing defenses that increase the *robustness* of a neural network, defined as a measure of how easy it is to find adversarial examples that are close to their original input. In this paper we study one of these, *distillation as a defense* [[39](#bib.bib39)], that hopes to secure an arbitrary neural network. This type of defensive distillation was shown to make generating adversarial examples nearly impossible for existing attack techniques [[39](#bib.bib39)]. We find that although the current state-of-the-art fails to find adversarial examples for defensively distilled networks, the stronger attacks we develop in this paper *are* able to construct adversarial examples.

### Ii-B Neural Networks and Notation

A neural network is a function F(x)=y that accepts an input x∈Rn and produces an output y∈Rm. The model F also implicitly depends on some model parameters θ; in our work the model is fixed, so for convenience we don’t show the dependence on θ. In this paper we focus on neural networks used as an m-class classifier. The output of the network is computed using the softmax function, which ensures that the output vector y satisfies 0≤yi≤1 and y1+⋯+ym=1. The output vector y is thus treated as a probability distribution, i.e., yi is treated as the probability that input x has class i. The classifier assigns the label C(x) = argmax_i F(x)_i to the input x. Let C∗(x) be the correct label of x. The inputs to the softmax function are called *logits*. We use the notation from Papernot et al. [[39](#bib.bib39)]: define F to be the full neural network including the softmax function, Z(x)=z to be the output of all layers except the softmax (so z are the logits), and

F(x) = softmax(Z(x)) = y.

A neural network typically 222Most simple networks have this simple linear structure, however other more sophisticated networks have more complicated structures (e.g., ResNet [[16](#bib.bib16)] and Inception [[45](#bib.bib45)]). The network architecture does not impact our attacks. consists of layers

F = softmax ∘ F_n ∘ F_{n−1} ∘ ⋯ ∘ F_1

where

F_i(x) = σ(θ_i ⋅ x) + θ̂_i

for some non-linear activation function σ, some matrix θ_i of model weights, and some vector θ̂_i of model biases. Together θ and θ̂ make up the model parameters. Common choices of σ are tanh [[31](#bib.bib31)], sigmoid, ReLU [[29](#bib.bib29)], or ELU [[7](#bib.bib7)].
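To make the notation concrete, here is a minimal sketch of the Z/F/C split in PyTorch (purely illustrative — the toy architecture and framework choice are ours, not the paper's):

```python
import torch
import torch.nn as nn

# Z: every layer except the softmax; its output z = Z(x) are the logits.
Z = nn.Sequential(nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, 10))

def F(x):
    # F is the full network: softmax of the logits, a distribution over classes.
    return torch.softmax(Z(x), dim=-1)

def C(x):
    # The classifier outputs the label with the highest probability.
    return torch.argmax(F(x), dim=-1)
```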
In this paper we focus primarily on networks that use a ReLU activation function, as it currently is the most widely used activation function [[45](#bib.bib45), [44](#bib.bib44), [31](#bib.bib31), [39](#bib.bib39)]. We use image classification as our primary evaluation domain. An h×w-pixel grey-scale image is a two-dimensional vector x∈Rhw, where xi denotes the intensity of pixel i and is scaled to be in the range [0,1]. A color RGB image is a three-dimensional vector x∈R3hw. We do not convert RGB images to HSV, HSL, or other cylindrical coordinate representations of color images: the neural networks act on raw pixel values.

### Ii-C Adversarial Examples

Szegedy *et al.* [[46](#bib.bib46)] first pointed out the existence of *adversarial examples*: given a valid input x and a target t≠C∗(x), it is often possible to find a similar input x′ such that C(x′)=t yet x,x′ are close according to some distance metric. An example x′ with this property is known as a *targeted* adversarial example. A less powerful attack also discussed in the literature instead asks for *untargeted* adversarial examples: instead of classifying x as a given target class, we only search for an input x′ so that C(x′)≠C∗(x) and x,x′ are close. Untargeted attacks are strictly less powerful than targeted attacks and we do not consider them in this paper. 333An untargeted attack is simply a more efficient (and often less accurate) method of running a targeted attack for each target and taking the closest. In this paper we focus on identifying the most accurate attacks, and do not consider untargeted attacks.

Instead, we consider three different approaches for how to choose the target class, in a targeted attack:

* *Average Case:* select the target class *uniformly at random* among the labels that are not the correct label.
* *Best Case:* perform the attack against all incorrect classes, and report the target class that was *least difficult* to attack.
* *Worst Case:* perform the attack against all incorrect classes, and report the target class that was *most difficult* to attack.

In all of our evaluations we perform all three types of attacks: best-case, average-case, and worst-case. Notice that if a classifier is only accurate 80% of the time, then the best case attack will require a change of 0 in 20% of cases. On ImageNet, we approximate the best-case and worst-case attack by sampling 100 random target classes out of the 1,000 possible for efficiency reasons.

### Ii-D Distance Metrics

In our definition of adversarial examples, we require use of a distance metric to quantify similarity. There are three widely-used distance metrics in the literature for generating adversarial examples, all of which are Lp norms. The Lp distance is written ∥x−x′∥p, where the p-norm ∥⋅∥p is defined as

∥v∥_p = ( Σ_{i=1}^{n} |v_i|^p )^{1/p}.

In more detail:

1. L0 distance measures the number of coordinates i such that xi≠x′i. Thus, the L0 distance corresponds to the number of pixels that have been altered in an image.444In RGB images, there are three channels that each can change. We count the number of *pixels* that are different, where two pixels are considered different if *any* of the three colors are different. We do not consider a distance metric where an attacker can change one color plane but not another meaningful. We relax this requirement when comparing to other L0 attacks that do not make this assumption to provide for a fair comparison.
Papernot *et al.* argue for the use of the L0 distance metric, and it is the primary distance metric under which defensive distillation’s security is argued [[39](#bib.bib39)].

2. L2 distance measures the standard Euclidean (root-mean-square) distance between x and x′. The L2 distance can remain small when there are many small changes to many pixels. This distance metric was used in the initial adversarial example work [[46](#bib.bib46)].

3. L∞ distance measures the maximum change to any of the coordinates:

∥x − x′∥_∞ = max(|x_1 − x′_1|, …, |x_n − x′_n|).

For images, we can imagine there is a maximum budget, and each pixel is allowed to be changed by up to this limit, with no limit on the number of pixels that are modified. Goodfellow *et al.* argue that L∞ is the optimal distance metric to use [[47](#bib.bib47)] and in a follow-up paper Papernot *et al.* argue distillation is secure under this distance metric [[36](#bib.bib36)].

No distance metric is a perfect measure of human perceptual similarity, and we pass no judgement on exactly which distance metric is optimal. We believe constructing and evaluating a good distance metric is an important research question we leave to future work. However, since most existing work has picked one of these three distance metrics, and since defensive distillation argued security against two of these, we too use these distance metrics and construct attacks that perform superior to the state-of-the-art for each of these distance metrics. When reporting all numbers in this paper, we report using the distance metric as defined above, on the range [0,1]. (That is, changing a pixel in a greyscale image from full-on to full-off will result in an L2 change of 1.0 and an L∞ change of 1.0, not 255.)

### Ii-E Defensive Distillation

We briefly provide a high-level overview of defensive distillation. We provide a complete description later in Section [VIII](#S8 "VIII Evaluating Defensive Distillation ‣ Towards Evaluating the Robustness of Neural Networks"). To defensively distill a neural network, begin by first training a network with identical architecture on the training data in a standard manner. When we compute the softmax while training this network, replace it with a more-smooth version of the softmax (by dividing the logits by some constant T). At the end of training, generate the *soft training labels* by evaluating this network on each of the training instances and taking the output labels of the network. Then, throw out the first network and use only the soft training labels. With those, train a second network where instead of training it on the original training labels, use the soft labels. This trains the second model to behave like the first model, and the soft labels convey additional hidden knowledge learned by the first model.

The key insight here is that by training to match the first network, we will hopefully avoid over-fitting against any of the training data. If the reason that adversarial examples exist is because neural networks are highly non-linear and have “blind spots” [[46](#bib.bib46)] where adversarial examples lie, then preventing this type of over-fitting might remove those blind spots. In fact, as we will see later, defensive distillation does not remove adversarial examples. One potential reason this may occur is that others [[11](#bib.bib11)] have argued the reason adversarial examples exist is not due to blind spots in a highly non-linear neural network, but due only to the locally-linear nature of neural networks.
This so-called linearity hypothesis appears to be true [[47](#bib.bib47)], and under this explanation it is perhaps less surprising that distillation does not increase the robustness of neural networks.

### Ii-F Organization

The remainder of this paper is structured as follows. In the next section, we survey existing attacks that have been proposed in the literature for generating adversarial examples, for the L2, L∞, and L0 distance metrics. We then describe our attack algorithms that target the same three distance metrics and provide superior results to the prior work. Having developed these attacks, we review defensive distillation in more detail and discuss why the existing attacks fail to find adversarial examples on defensively distilled networks. Finally, we attack defensive distillation with our new algorithms and show that it provides only limited value.

Iii Attack Algorithms
----------------------

### Iii-a L-Bfgs

Szegedy *et al.* [[46](#bib.bib46)] generated adversarial examples using box-constrained L-BFGS. Given an image x, their method finds a different image x′ that is similar to x under L2 distance, yet is labeled differently by the classifier. They model the problem as a constrained minimization problem:

minimize ∥x − x′∥_2^2
such that C(x′) = l
x′ ∈ [0,1]^n

This problem can be very difficult to solve, however, so Szegedy *et al.* instead solve the following problem:

minimize c ⋅ ∥x − x′∥_2^2 + loss_{F,l}(x′)
such that x′ ∈ [0,1]^n

where loss_{F,l} is a function mapping an image to a positive real number. One common loss function to use is cross-entropy. Line search is performed to find the constant c>0 that yields an adversarial example of minimum distance: in other words, we repeatedly solve this optimization problem for multiple values of c, adaptively updating c using bisection search or any other method for one-dimensional optimization.

### Iii-B Fast Gradient Sign

The fast gradient sign [[11](#bib.bib11)] method has two key differences from the L-BFGS method: first, it is optimized for the L∞ distance metric, and second, it is designed primarily to be fast instead of producing very close adversarial examples. Given an image x, the fast gradient sign method sets

x′ = x − ϵ ⋅ sign(∇loss_{F,t}(x)),

where ϵ is chosen to be sufficiently small so as to be undetectable, and t is the target label. Intuitively, for each pixel, the fast gradient sign method uses the gradient of the loss function to determine in which direction the pixel’s intensity should be changed (whether it should be increased or decreased) to minimize the loss function; then, it shifts all pixels simultaneously. It is important to note that the fast gradient sign attack was designed to be *fast*, rather than optimal. It is not meant to produce the minimal adversarial perturbations.

#### Iterative Gradient Sign

Kurakin *et al.* introduce a simple refinement of the fast gradient sign method [[26](#bib.bib26)] where instead of taking a single step of size ϵ in the direction of the gradient-sign, multiple smaller steps α are taken, and the result is clipped by the same ϵ. Specifically, begin by setting

x′_0 = 0

and then on each iteration

x′_i = x′_{i−1} − clip_ϵ(α ⋅ sign(∇loss_{F,t}(x′_{i−1})))

Iterative gradient sign was found to produce superior results to fast gradient sign [[26](#bib.bib26)].
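Both methods are short enough to state directly in code. The following sketch (our own illustration in PyTorch, not the original authors' code) assumes `model` returns logits and `loss_fn` is a loss such as cross-entropy toward the target class `t`; as is standard in implementations, it clips results back into the valid image range and clips the *cumulative* perturbation into the ϵ-ball:

```python
import torch

def fast_gradient_sign(model, loss_fn, x, t, eps):
    # Single step that decreases the loss toward the target class t.
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), t).backward()
    return (x - eps * x.grad.sign()).clamp(0, 1).detach()

def iterative_gradient_sign(model, loss_fn, x0, t, eps, alpha, steps):
    # Many small steps of size alpha; the total perturbation is clipped
    # back into the eps-ball around x0 after every step.
    x = x0.clone().detach()
    for _ in range(steps):
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), t).backward()
        with torch.no_grad():
            x = x - alpha * x.grad.sign()
            x = x0 + (x - x0).clamp(-eps, eps)  # stay within the eps-ball
            x = x.clamp(0, 1)                   # stay a valid image
    return x
```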
### Iii-C Jsma

Papernot *et al.* introduced an attack optimized under L0 distance [[38](#bib.bib38)] known as the Jacobian-based Saliency Map Attack (JSMA). We give a brief summary of their attack algorithm; for a complete description and motivation, we encourage the reader to read their original paper [[38](#bib.bib38)].

At a high level, the attack is a greedy algorithm that picks pixels to modify one at a time, increasing the target classification on each iteration. They use the gradient ∇Z(x)_l to compute a *saliency map*, which models the impact each pixel has on the resulting classification. A large value indicates that changing it will significantly increase the likelihood of the model labeling the image as the target class l. Given the saliency map, the algorithm picks the most important pixel and modifies it to increase the likelihood of class l. This is repeated until either more than a set threshold of pixels are modified, which makes the attack detectable, or it succeeds in changing the classification.

In more detail, we begin by defining the saliency map in terms of a pair of pixels p,q. Define

α_{pq} = Σ_{i∈{p,q}} ∂Z(x)_t / ∂x_i

β_{pq} = ( Σ_{i∈{p,q}} Σ_j ∂Z(x)_j / ∂x_i ) − α_{pq}

so that α_{pq} represents how much changing both pixels p and q will change the target classification, and β_{pq} represents how much changing p and q will change all other outputs. Then the algorithm picks

(p∗, q∗) = argmax_{(p,q)} (−α_{pq} ⋅ β_{pq}) ⋅ (α_{pq} > 0) ⋅ (β_{pq} < 0)

so that α_{pq}>0 (the target class is more likely), β_{pq}<0 (the other classes become less likely), and −α_{pq}⋅β_{pq} is largest.

Notice that JSMA uses the output of the second-to-last layer Z, the logits, in the calculation of the gradient: the output of the softmax F is *not* used. We refer to this as the JSMA-Z attack. However, when the authors apply this attack to their defensively distilled networks, they modify the attack so it uses F instead of Z. In other words, their computation uses the output of the softmax (F) instead of the logits (Z). We refer to this modification as the JSMA-F attack.555We verified this via personal communication with the authors.

When an image has multiple color channels (e.g., RGB), this attack considers the L0 difference to be 1 for each color channel changed independently (so that if all three color channels of one pixel change, the L0 norm would be 3). While we do not believe this is a meaningful threat model, when comparing to this attack, we evaluate under both models.

### Iii-D Deepfool

Deepfool [[34](#bib.bib34)] is an untargeted attack technique optimized for the L2 distance metric. It is efficient and produces closer adversarial examples than the L-BFGS approach discussed earlier. The authors construct Deepfool by imagining that the neural networks are totally linear, with a hyperplane separating each class from another. From this, they analytically derive the optimal solution to this simplified problem, and construct the adversarial example. Then, since neural networks are not actually linear, they take a step towards that solution, and repeat the process a second time. The search terminates when a true adversarial example is found. The exact formulation used is rather sophisticated; interested readers should refer to the original work [[34](#bib.bib34)].
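Of the surveyed attacks, JSMA's saliency computation is the easiest to make concrete with automatic differentiation. A simplified sketch (ours, in PyTorch): it scores single pixels rather than the pixel *pairs* the actual JSMA searches over (a pair's score just combines these per-pixel quantities), and it assumes `Z` maps a flattened input to a 1-D vector of logits:

```python
import torch

def saliency_scores(Z, x, t):
    # Jacobian of all logits with respect to all input pixels:
    # jac has shape [num_classes, num_pixels].
    jac = torch.autograd.functional.jacobian(Z, x)
    alpha = jac[t]                 # effect of each pixel on the target logit
    beta = jac.sum(dim=0) - alpha  # combined effect on every other logit
    # A pixel is useful when it raises the target (alpha > 0) while
    # lowering the rest (beta < 0); rank such pixels by -alpha * beta.
    useful = (alpha > 0) & (beta < 0)
    return torch.where(useful, -alpha * beta, torch.zeros_like(alpha))
```

The greedy attack would then repeatedly perturb the highest-scoring pixel and recompute the scores until the classification changes or the pixel budget is exceeded.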
| Layer Type | MNIST Model | CIFAR Model |
| --- | --- | --- |
| Convolution + ReLU | 3×3×32 | 3×3×64 |
| Convolution + ReLU | 3×3×32 | 3×3×64 |
| Max Pooling | 2×2 | 2×2 |
| Convolution + ReLU | 3×3×64 | 3×3×128 |
| Convolution + ReLU | 3×3×64 | 3×3×128 |
| Max Pooling | 2×2 | 2×2 |
| Fully Connected + ReLU | 200 | 256 |
| Fully Connected + ReLU | 200 | 256 |
| Softmax | 10 | 10 |

TABLE I: Model architectures for the MNIST and CIFAR models. This architecture is identical to that of the original defensive distillation work [[39](#bib.bib39)].

| Parameter | MNIST Model | CIFAR Model |
| --- | --- | --- |
| Learning Rate | 0.1 | 0.01 (decay 0.5) |
| Momentum | 0.9 | 0.9 (decay 0.5) |
| Delay Rate | - | 10 epochs |
| Dropout | 0.5 | 0.5 |
| Batch Size | 128 | 128 |
| Epochs | 50 | 50 |

TABLE II: Model parameters for the MNIST and CIFAR models. These parameters are identical to those of the original defensive distillation work [[39](#bib.bib39)].

Iv Experimental Setup
----------------------

Before we develop our attack algorithms to break distillation, we describe how we train the models on which we will evaluate our attacks. We train two networks for the MNIST [[28](#bib.bib28)] and CIFAR-10 [[24](#bib.bib24)] classification tasks, and use one pre-trained network for the ImageNet classification task [[41](#bib.bib41)]. Our models and training approaches are identical to those presented in [[39](#bib.bib39)]. We achieve 99.5% accuracy on MNIST, comparable to the state of the art. On CIFAR-10, we achieve 80% accuracy, identical to the accuracy given in the distillation work. 666This is compared to the state-of-the-art result of 95% [[12](#bib.bib12), [44](#bib.bib44), [31](#bib.bib31)]. However, in order to provide the most accurate comparison to the original work, we feel it is important to reproduce their model architectures.

*MNIST and CIFAR-10.* The model architecture is given in Table [I](#S3.T1 "TABLE I ‣ III-D Deepfool ‣ III Attack Algorithms ‣ Towards Evaluating the Robustness of Neural Networks") and the hyperparameters selected in Table [II](#S3.T2 "TABLE II ‣ III-D Deepfool ‣ III Attack Algorithms ‣ Towards Evaluating the Robustness of Neural Networks"). We use a momentum-based SGD optimizer during training. The CIFAR-10 model significantly overfits the training data even with dropout: we obtain a final training cross-entropy loss of 0.05 with accuracy 98%, compared to a validation loss of 1.2 with validation accuracy 80%. We do not alter the network by performing image augmentation or adding additional dropout as that was not done in [[39](#bib.bib39)].

*ImageNet.* Along with considering MNIST and CIFAR, which are both relatively small datasets, we also consider the ImageNet dataset. Instead of training our own ImageNet model, we use the pre-trained Inception v3 network [[45](#bib.bib45)], which achieves 96% top-5 accuracy (that is, the probability that the correct class is one of the five most likely as reported by the network is 96%). Inception takes images as 299×299×3 dimensional vectors.

V Our Approach
---------------

We now turn to our approach for constructing adversarial examples. To begin, we rely on the initial formulation of adversarial examples [[46](#bib.bib46)] and formally define the problem of finding an adversarial instance for an image x as follows:

minimize D(x, x + δ)
such that C(x + δ) = t
x + δ ∈ [0,1]^n

where x is fixed, and the goal is to find δ that minimizes D(x,x+δ).
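For concreteness, the Table I MNIST architecture translates line-by-line into code. A sketch (assuming standard unpadded 3×3 convolutions in PyTorch; the dropout placement between the fully-connected layers is our guess, since Table II lists the rate but not the position):

```python
import torch.nn as nn

# Table I, MNIST column. 28x28 input -> 26 -> 24 -> pool 12 -> 10 -> 8 -> pool 4,
# so the flattened feature map entering the first FC layer has 64 * 4 * 4 = 1024 units.
mnist_model = nn.Sequential(
    nn.Conv2d(1, 32, 3), nn.ReLU(),
    nn.Conv2d(32, 32, 3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Conv2d(64, 64, 3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 200), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(200, 200), nn.ReLU(),
    nn.Linear(200, 10),  # outputs the logits Z(x); softmax is folded into the loss
)
```

With the models fixed, the rest of the section develops the optimization formulation just stated.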
That is, we want to find some small change δ that we can make to an image x that will change its classification, but so that the result is still a valid image. Here D is some distance metric; for us, it will be either L0, L2, or L∞ as discussed earlier. We solve this problem by formulating it as an appropriate optimization instance that can be solved by existing optimization algorithms. There are many possible ways to do this; we explore the space of formulations and empirically identify which ones lead to the most effective attacks.

### V-a Objective Function

The above formulation is difficult for existing algorithms to solve directly, as the constraint C(x+δ)=t is highly non-linear. Therefore, we express it in a different form that is better suited for optimization. We define an objective function f such that C(x+δ)=t if and only if f(x+δ)≤0. There are many possible choices for f:

f_1(x′) = −loss_{F,t}(x′) + 1
f_2(x′) = (max_{i≠t} F(x′)_i − F(x′)_t)^+
f_3(x′) = softplus(max_{i≠t} F(x′)_i − F(x′)_t) − log(2)
f_4(x′) = (0.5 − F(x′)_t)^+
f_5(x′) = −log(2 F(x′)_t − 2)
f_6(x′) = (max_{i≠t} Z(x′)_i − Z(x′)_t)^+
f_7(x′) = softplus(max_{i≠t} Z(x′)_i − Z(x′)_t) − log(2)

where s is the correct classification, (e)^+ is short-hand for max(e,0), softplus(x) = log(1+exp(x)), and loss_{F,s}(x) is the cross entropy loss for x. Notice that we have adjusted some of the above formulas by adding a constant; we have done this only so that the function respects our definition. This does not impact the final result, as it just scales the minimization function.

Now, instead of formulating the problem as

minimize D(x, x + δ)
such that f(x + δ) ≤ 0
x + δ ∈ [0,1]^n

we use the alternative formulation:

minimize D(x, x + δ) + c ⋅ f(x + δ)
such that x + δ ∈ [0,1]^n

where c>0 is a suitably chosen constant. These two are equivalent, in the sense that there exists c>0 such that the optimal solution to the latter matches the optimal solution to the former. After instantiating the distance metric D with an lp norm, the problem becomes: given x, find δ that solves

minimize ∥δ∥_p + c ⋅ f(x + δ)
such that x + δ ∈ [0,1]^n

*Choosing the constant c.* Empirically, we have found that often the best way to choose c is to use the smallest value of c for which the resulting solution x∗ has f(x∗)≤0. This causes gradient descent to minimize both of the terms simultaneously instead of picking only one to optimize over first.

![Sensitivity on the constant c](https://media.arxiv-vanity.com/render-output/8045454/x1.png)

Fig. 2: Sensitivity on the constant c. We plot the L2 distance of the adversarial example computed by gradient descent as a function of c, for objective function f6. When c<.1, the attack rarely succeeds. After c>1, the attack becomes less effective, but always succeeds.

We verify this by running our f6 formulation (which we found most effective) for values of c spaced uniformly (on a log scale) from c=0.01 to c=100 on the MNIST dataset. We plot this line in Figure [2](#S5.F2 "Fig. 2 ‣ V-A Objective Function ‣ V Our Approach ‣ Towards Evaluating the Robustness of Neural Networks"). 777The corresponding figures for other objective functions are similar; we omit them for brevity.
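To illustrate how little machinery the best-performing objective needs, here is f_6 in code (a sketch in PyTorch; `logits` is Z(x′) for a single input and `t` the target class):

```python
import torch

def f6(logits, t):
    # f6(x') = max( max_{i != t} Z(x')_i - Z(x')_t , 0 ).
    # Non-positive exactly when the target class already has the largest logit.
    other = logits.clone()
    other[t] = float('-inf')  # exclude the target class from the max
    return torch.clamp(other.max() - logits[t], min=0.0)
```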
Further, we have found that if we choose the smallest c such that f(x∗)≤0, the solution is within 5% of optimal 70% of the time, and within 30% of optimal 98% of the time, where “optimal” refers to the solution found using the best value of c. Therefore, in our implementations we use modified binary search to choose c.

| Objective | Best: Change of Variable | Best: Clipped Descent | Best: Projected Descent | Average: Change of Variable | Average: Clipped Descent | Average: Projected Descent | Worst: Change of Variable | Worst: Clipped Descent | Worst: Projected Descent |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| f1 | 2.46 (100%) | 2.93 (100%) | 2.31 (100%) | 4.35 (100%) | 5.21 (100%) | 4.11 (100%) | 7.76 (100%) | 9.48 (100%) | 7.37 (100%) |
| f2 | 4.55 (80%) | 3.97 (83%) | 3.49 (83%) | 3.22 (44%) | 8.99 (63%) | 15.06 (74%) | 2.93 (18%) | 10.22 (40%) | 18.90 (53%) |
| f3 | 4.54 (77%) | 4.07 (81%) | 3.76 (82%) | 3.47 (44%) | 9.55 (63%) | 15.84 (74%) | 3.09 (17%) | 11.91 (41%) | 24.01 (59%) |
| f4 | 5.01 (86%) | 6.52 (100%) | 7.53 (100%) | 4.03 (55%) | 7.49 (71%) | 7.60 (71%) | 3.55 (24%) | 4.25 (35%) | 4.10 (35%) |
| f5 | 1.97 (100%) | 2.20 (100%) | 1.94 (100%) | 3.58 (100%) | 4.20 (100%) | 3.47 (100%) | 6.42 (100%) | 7.86 (100%) | 6.12 (100%) |
| f6 | 1.94 (100%) | 2.18 (100%) | 1.95 (100%) | 3.47 (100%) | 4.11 (100%) | 3.41 (100%) | 6.03 (100%) | 7.50 (100%) | 5.89 (100%) |
| f7 | 1.96 (100%) | 2.21 (100%) | 1.94 (100%) | 3.53 (100%) | 4.14 (100%) | 3.43 (100%) | 6.20 (100%) | 7.57 (100%) | 5.94 (100%) |

TABLE III: Evaluation of all combinations of one of the seven possible objective functions with one of the three box constraint encodings. Each cell gives the mean L2 distortion and, in parentheses, the success probability (fraction of instances for which an adversarial example can be found). Evaluated on 1000 random instances. When the success is not 100%, mean is for successful attacks only.

### V-B Box constraints

To ensure the modification yields a valid image, we have a constraint on δ: we must have 0≤xi+δi≤1 for all i. In the optimization literature, this is known as a “box constraint.” Previous work uses a particular optimization algorithm, L-BFGS-B, which supports box constraints natively. We investigate three different methods of approaching this problem.

1. *Projected gradient descent* performs one step of standard gradient descent, and then clips all the coordinates to be within the box. This approach can work poorly for gradient descent approaches that have a complicated update step (for example, those with momentum): when we clip the actual xi, we unexpectedly change the input to the next iteration of the algorithm.

2. *Clipped gradient descent* does not clip xi on each iteration; rather, it incorporates the clipping into the objective function to be minimized. In other words, we replace f(x+δ) with f(min(max(x+δ,0),1)), with the min and max taken component-wise. While solving the main issue with projected gradient descent, clipping introduces a new problem: the algorithm can get stuck in a flat spot where it has increased some component xi to be substantially larger than the maximum allowed. When this happens, the partial derivative becomes zero, so even if some improvement is possible by later reducing xi, gradient descent has no way to detect this.
3. *Change of variables* introduces a new variable w and instead of optimizing over the variable δ defined above, we apply a change-of-variables and optimize over w, setting

δ_i = ½ (tanh(w_i) + 1) − x_i.

Since −1 ≤ tanh(w_i) ≤ 1, it follows that 0 ≤ x_i + δ_i ≤ 1, so the solution will automatically be valid. 888Instead of scaling by 1/2 we scale by 1/2 + ϵ to avoid dividing by zero. We can think of this approach as a smoothing of clipped gradient descent that eliminates the problem of getting stuck in extreme regions.

These methods allow us to use other optimization algorithms that don’t natively support box constraints. We use the Adam [[23](#bib.bib23)] optimizer almost exclusively, as we have found it to be the most effective at quickly finding adversarial examples. We tried three solvers — standard gradient descent, gradient descent with momentum, and Adam — and all three produced identical-quality solutions. However, Adam converges substantially more quickly than the others.

### V-C Evaluation of approaches

For each possible objective function f(⋅) and method to enforce the box constraint, we evaluate the quality of the adversarial examples found. To choose the optimal c, we perform 20 iterations of binary search over c. For each selected value of c, we run 10,000 iterations of gradient descent with the Adam optimizer. 999Adam converges to 95% of optimum within 1,000 iterations 92% of the time. For completeness we run it for 10,000 iterations at each step.

The results of this analysis are in Table [III](#S5.T3 "TABLE III ‣ V-A Objective Function ‣ V Our Approach ‣ Towards Evaluating the Robustness of Neural Networks"). We evaluate the quality of the adversarial examples found on the MNIST and CIFAR datasets. The relative ordering of each objective function is identical between the two datasets, so for brevity we report only results for MNIST. There is a factor of three difference in quality between the best objective function and the worst. The choice of method for handling box constraints does not impact the quality of results as significantly for the best minimization functions. In fact, the worst performing objective function, cross entropy loss, is the approach that was most suggested in the literature previously [[46](#bib.bib46), [42](#bib.bib42)].

*Why are some loss functions better than others?* When c=0, gradient descent will not make any move away from the initial image. However, a large c often causes the initial steps of gradient descent to perform in an overly-greedy manner, only traveling in the direction which can most easily reduce f and ignoring the D loss — thus causing gradient descent to find sub-optimal solutions. This means that for loss functions f1 and f4, there is no good constant c that is useful throughout the duration of the gradient descent search. Since the constant c weights the relative importance of the distance term and the loss term, in order for a fixed constant c to be useful, the relative value of these two terms should remain approximately equal. This is not the case for these two loss functions.

To explain why this is the case, we will have to take a side discussion to analyze how adversarial examples exist. Consider a valid input x and an adversarial example x′ on a network. What does it look like as we linearly interpolate from x to x′? That is, let y = αx + (1−α)x′ for α ∈ [0,1]. It turns out the value of Z(⋅)_t is mostly linear from the input to the adversarial example, and therefore F(⋅)_t is a logistic.
We verify this fact empirically by constructing adversarial examples on the first 1,000 test images on both the MNIST and CIFAR dataset with our approach, and find the Pearson correlation coefficient r>.9.

Given this, consider loss function f4 (the argument for f1 is similar). In order for the gradient descent attack to make any change initially, the constant c will have to be large enough that

ϵ < c (f_1(x+ϵ) − f_1(x))

or, as ϵ → 0,

1/c < |∇f_1(x)|

implying that c must be larger than the inverse of the gradient to make progress, but the gradient of f1 is identical to F(⋅)_t so will be tiny around the initial image, meaning c will have to be extremely large. However, as soon as we leave the immediate vicinity of the initial image, the gradient ∇f_1(x+δ) increases at an exponential rate, making the large constant c cause gradient descent to perform in an overly greedy manner.

We verify all of this theory empirically. When we run our attack trying constants chosen from 10^−10 to 10^10 the average constant for loss function f4 was 10^6. The average gradient of the loss function f1 around the valid image is 2^−20 but 2^−1 at the closest adversarial example. This means c is a million times larger than it has to be, causing the loss functions f4 and f1 to perform worse than any of the others.

### V-D Discretization

We model pixel intensities as a (continuous) real number in the range [0,1]. However, in a valid image, each pixel intensity must be a (discrete) integer in the range {0,1,…,255}. This additional requirement is not captured in our formulation. In practice, we ignore the integrality constraints, solve the continuous optimization problem, and then round to the nearest integer: the intensity of the ith pixel becomes ⌊255(xi+δi)⌉. This rounding will slightly degrade the quality of the adversarial example. If we need to restore the attack quality, we perform greedy search on the lattice defined by the discrete solutions by changing one pixel value at a time. This greedy search never failed for any of our attacks.

Prior work has largely ignored the integrality constraints. 101010One exception: The JSMA attack [[38](#bib.bib38)] handles this by only setting the output value to either 0 or 255. For instance, when using the fast gradient sign attack with ϵ=0.1 (i.e., changing pixel values by 10%), discretization rarely affects the success rate of the attack. In contrast, in our work, we are able to find attacks that make much smaller changes to the images, so discretization effects cannot be ignored. We take care to always generate valid images; when reporting the success rate of our attacks, they always are for attacks that include the discretization post-processing.

Vi Our Three Attacks
---------------------

![Our L2 adversary on MNIST](https://media.arxiv-vanity.com/render-output/8045454/mnist_l2.png)

Fig. 3: Our L2 adversary applied to the MNIST dataset performing a targeted attack for every source/target pair. Each digit is the first image in the dataset with that label.
VI Our Three Attacks
--------------------

![Our L2 adversary applied to the MNIST dataset](https://media.arxiv-vanity.com/render-output/8045454/mnist_l2.png)

Fig. 3: Our L2 adversary applied to the MNIST dataset performing a targeted attack for every source/target pair (rows: source classification; columns: target classification). Each digit is the first image in the dataset with that label.

![Our L0 adversary applied to the MNIST dataset](https://media.arxiv-vanity.com/render-output/8045454/mnist_l0.png)

Fig. 4: Our L0 adversary applied to the MNIST dataset performing a targeted attack for every source/target pair (rows: source classification; columns: target classification). Each digit is the first image in the dataset with that label.

### VI-A Our L2 Attack

Putting these ideas together, we obtain a method for finding adversarial examples that will have low distortion in the L2 metric. Given x, we choose a target class t (such that t ≠ C∗(x)) and then search for the w that solves

$$\text{minimize}\quad \left\lVert \tfrac{1}{2}(\tanh(w)+1) - x \right\rVert_2^2 + c \cdot f\!\left(\tfrac{1}{2}(\tanh(w)+1)\right)$$

with f defined as

$$f(x') = \max\!\left( \max\{ Z(x')_i : i \neq t \} - Z(x')_t,\ -\kappa \right).$$

This f is based on the best objective function found earlier, modified slightly so that we can control the confidence with which the misclassification occurs by adjusting κ. The parameter κ encourages the solver to find an adversarial instance x′ that will be classified as class t with high confidence. We set κ = 0 for our attacks, but we note that a side benefit of this formulation is that it allows one to control the desired confidence. This is discussed further in Section [VIII-D](#S8.SS4 "VIII-D Transferability ‣ VIII Evaluating Defensive Distillation ‣ Towards Evaluating the Robustness of Neural Networks").

Figure [3](#S6.F3 "Fig. 3 ‣ VI Our Three Attacks ‣ Towards Evaluating the Robustness of Neural Networks") shows this attack applied to our MNIST model for each source digit and target digit. Almost all attacks are visually indistinguishable from the original digit. A comparable figure (Figure [12](#A1.F12 "Fig. 12 ‣ Appendix A CIFAR-10 Source-Target Attacks ‣ Towards Evaluating the Robustness of Neural Networks")) for CIFAR is in the appendix; there, no attack is visually distinguishable from the baseline image.

*Multiple starting-point gradient descent.* The main problem with gradient descent is that its greedy search is not guaranteed to find the optimal solution and can become stuck in a local minimum. To remedy this, we pick multiple random starting points close to the original image and run gradient descent from each of those points for a fixed number of iterations. We sample the starting points uniformly at random from the ball of radius r, where r is the distance to the closest adversarial example found so far. Starting from multiple points reduces the likelihood that gradient descent gets stuck in a bad local minimum.
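For concreteness, a minimal PyTorch sketch of this formulation for a single image follows. (The authors' released code is TensorFlow-based; the `model` callable returning the logits Z(·), the hyperparameters, and the single fixed constant c are all assumptions here, and the outer binary search over c is omitted.)

```python
import torch

def l2_attack(model, x, t, c=1.0, kappa=0.0, steps=1000, lr=0.01):
    """Sketch of the L2 attack for one image x with pixels in [0,1].

    model(batch) is assumed to return logits Z(.); t is the target class.
    """
    # Change of variables: x' = 0.5*(tanh(w)+1) is always a valid image.
    w = torch.atanh(2 * x.clamp(1e-6, 1 - 1e-6) - 1).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        z = model(x_adv.unsqueeze(0))[0]
        # f(x') = max(max_{i != t} Z(x')_i - Z(x')_t, -kappa)
        other = z[torch.arange(len(z)) != t].max()
        f = torch.clamp(other - z[t], min=-kappa)
        loss = torch.sum((x_adv - x) ** 2) + c * f
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```

Running this for the roughly 20 values of c chosen by binary search, and keeping the closest x′ that still classifies as t, reproduces the attack's outer loop.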
### VI-B Our L0 Attack

The L0 distance metric is non-differentiable, and therefore it is ill-suited for standard gradient descent. Instead, we use an iterative algorithm that, in each iteration, identifies some pixels that don't have much effect on the classifier output and then fixes those pixels, so their value will never be changed. The set of fixed pixels grows in each iteration until we have, by process of elimination, identified a minimal (but possibly not minimum) subset of pixels that can be modified to generate an adversarial example. In each iteration, we use our L2 attack to identify which pixels are unimportant.

In more detail, on each iteration we call the L2 adversary, restricted to modify only the pixels in the allowed set. Let δ be the solution returned from the L2 adversary on input image x, so that x+δ is an adversarial example. We compute g = ∇f(x+δ) (the gradient of the objective function, evaluated at the adversarial instance). We then select the pixel i = argmin_i g_i · δ_i and fix it, i.e., remove i from the allowed set. (Selecting the index i that minimizes δ_i is simpler, but it yields results with 1.5× higher L0 distortion.) The intuition is that g_i · δ_i tells us how much reduction to f(·) we obtain from the ith pixel of the image when moving from x to x+δ: g_i tells us how much reduction in f we obtain per unit change to the ith pixel, and we multiply this by how much the ith pixel has changed. This process repeats until the L2 adversary fails to find an adversarial example.

There is one final detail required to achieve strong results: choosing a constant c to use for the L2 adversary. To do this, we initially set c to a very low value (e.g., 10^−4). We then run our L2 adversary at this c-value. If it fails, we double c and try again, until it is successful. We abort the search if c exceeds a fixed threshold (e.g., 10^10).

JSMA *grows* a set of pixels (initially empty) that are allowed to be changed, and sets those pixels to maximize the total loss. In contrast, our attack *shrinks* the set of pixels (initially containing every pixel) that are allowed to be changed.

Our algorithm is significantly more effective than JSMA (see Section [VII](#S7.T5 "TABLE V ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks") for an evaluation). It is also efficient: we introduce optimizations that make it about as fast as our L2 attack with a single starting point on MNIST and CIFAR; it is substantially slower on ImageNet. Instead of starting gradient descent in each iteration from the initial image, we start gradient descent from the solution found on the previous iteration ("warm-start"). This dramatically reduces the number of rounds of gradient descent needed during each iteration, as the solution with k pixels held constant is often very similar to the solution with k+1 pixels held constant.

Figure [4](#S6.F4 "Fig. 4 ‣ VI Our Three Attacks ‣ Towards Evaluating the Robustness of Neural Networks") shows the L0 attack applied to one digit of each source class, targeting each target class, on the MNIST dataset. The attacks are visually noticeable, implying the L0 attack is more difficult than L2. Perhaps the worst case is that of a 7 being made to classify as a 6; interestingly, this source/target pair is also one of the only visually distinguishable L2 attacks. A comparable figure (Figure [11](#A1.F11 "Fig. 11 ‣ Appendix A CIFAR-10 Source-Target Attacks ‣ Towards Evaluating the Robustness of Neural Networks")) for CIFAR is in the appendix.
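A compressed sketch of the shrinking-set loop appears below (illustrative only). `masked_l2_attack(x, allowed)` is an assumed variant of the L2 routine above that restricts changes to pixels where `allowed` is True, performs the doubling search over c internally, and returns the pair (δ, g = ∇f(x+δ)), or None on failure:

```python
import numpy as np

def l0_attack(x, masked_l2_attack):
    """Sketch of the shrinking-set L0 attack on a flat image x."""
    allowed = np.ones(x.size, dtype=bool)  # initially every pixel may change
    best = None
    while True:
        result = masked_l2_attack(x, allowed)
        if result is None:                 # L2 adversary failed: stop
            return best
        delta, g = result
        best = x + delta
        # Fix the least important pixel, i = argmin_i g_i * delta_i,
        # restricted to pixels that are still allowed to change.
        scores = np.where(allowed, g * delta, np.inf)
        allowed[np.argmin(scores)] = False
```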
![Our L∞ adversary applied to the MNIST dataset](https://media.arxiv-vanity.com/render-output/8045454/mnist_li.png)

Fig. 5: Our L∞ adversary applied to the MNIST dataset performing a targeted attack for every source/target pair (rows: source classification; columns: target classification). Each digit is the first image in the dataset with that label.

### VI-C Our L∞ Attack

The L∞ distance metric is not fully differentiable, and standard gradient descent does not perform well for it. We experimented with naively optimizing

$$\text{minimize}\quad c \cdot f(x+\delta) + \lVert \delta \rVert_\infty$$

However, we found that gradient descent produces very poor results: the ‖δ‖∞ term only penalizes the largest (in absolute value) entry in δ and has no impact on any of the others. As a result, gradient descent very quickly becomes stuck oscillating between two suboptimal solutions. Consider a case where δ_i = 0.5 and δ_j = 0.5 − ε. The L∞ norm will only penalize δ_i, not δ_j, and ∂‖δ‖∞/∂δ_j will be zero at this point. Thus the gradient imposes no penalty for increasing δ_j, even though it is already large. On the next iteration we might move to a position where δ_j is slightly larger than δ_i, say δ_i = 0.5 − ε′ and δ_j = 0.5 + ε″, a mirror image of where we started. In other words, gradient descent may oscillate back and forth across the line δ_i = δ_j = 0.5, making it nearly impossible to make progress.

We resolve this issue using an iterative attack. We replace the L2 term in the objective function with a penalty on any terms that exceed τ (initially 1, decreasing in each iteration). This prevents oscillation, as this loss term penalizes all large values simultaneously. Specifically, in each iteration we solve

$$\text{minimize}\quad c \cdot f(x+\delta) + \sum_i \left[ (\delta_i - \tau)^+ \right].$$

After each iteration, if δ_i < τ for all i, we reduce τ by a factor of 0.9 and repeat; otherwise, we terminate the search.

Again we must choose a good constant c for the L∞ adversary. We take the same approach as for the L0 attack: initially set c to a very low value and run the L∞ adversary at this c-value. If it fails, we double c and try again, until it is successful. We abort the search if c exceeds a fixed threshold.

Using "warm-start" for gradient descent in each iteration, this algorithm is about as fast as our L2 algorithm (with a single starting point).

Figure [5](#S6.F5 "Fig. 5 ‣ VI-B Our L0 Attack ‣ VI Our Three Attacks ‣ Towards Evaluating the Robustness of Neural Networks") shows the L∞ attack applied to one digit of each source class, targeting each target class, on the MNIST dataset. While most differences are not visually noticeable, a few are. Again, the worst case is that of a 7 being made to classify as a 6. A comparable figure (Figure [13](#A1.F13 "Fig. 13 ‣ Appendix A CIFAR-10 Source-Target Attacks ‣ Towards Evaluating the Robustness of Neural Networks")) for CIFAR is in the appendix; there, no attack is visually distinguishable from the baseline image.
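In code, the inner optimization and the shrinking-τ outer loop might look like the sketch below (same assumptions as before: PyTorch, an assumed `model` returning logits; the doubling search over c and the check that the returned image is actually adversarial are omitted):

```python
import torch

def linf_attack(model, x, t, c=1.0, steps=500, lr=0.005):
    """Sketch of the iterative L-infinity attack for one image x in [0,1]."""
    tau = 1.0
    best = None
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    while True:
        for _ in range(steps):  # delta carries over iterations (warm-start)
            x_adv = (x + delta).clamp(0, 1)
            z = model(x_adv.unsqueeze(0))[0]
            f = torch.clamp(z[torch.arange(len(z)) != t].max() - z[t], min=0)
            # sum_i (|delta_i| - tau)^+ penalizes every entry above tau at
            # once, which removes the oscillation described above
            penalty = torch.clamp(delta.abs() - tau, min=0).sum()
            loss = c * f + penalty
            opt.zero_grad()
            loss.backward()
            opt.step()
        if delta.detach().abs().max().item() < tau:
            best = (x + delta).clamp(0, 1).detach()
            tau *= 0.9          # all entries below tau: tighten and repeat
        else:
            return best         # some entry stuck above tau: terminate
```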
VII Attack Evaluation
---------------------

| | Best Case (MNIST) | Best Case (CIFAR) | Average Case (MNIST) | Average Case (CIFAR) | Worst Case (MNIST) | Worst Case (CIFAR) |
| --- | --- | --- | --- | --- | --- | --- |
| Our L0 | 8.5 / 100% | 5.9 / 100% | 16 / 100% | 13 / 100% | 33 / 100% | 24 / 100% |
| JSMA-Z | 20 / 100% | 20 / 100% | 56 / 100% | 58 / 100% | 180 / 98% | 150 / 100% |
| JSMA-F | 17 / 100% | 25 / 100% | 45 / 100% | 110 / 100% | 100 / 100% | 240 / 100% |
| Our L2 | 1.36 / 100% | 0.17 / 100% | 1.76 / 100% | 0.33 / 100% | 2.60 / 100% | 0.51 / 100% |
| Deepfool | 2.11 / 100% | 0.85 / 100% | − | − | − | − |
| Our L∞ | 0.13 / 100% | 0.0092 / 100% | 0.16 / 100% | 0.013 / 100% | 0.23 / 100% | 0.019 / 100% |
| Fast Gradient Sign | 0.22 / 100% | 0.015 / 99% | 0.26 / 42% | 0.029 / 51% | − / 0% | 0.34 / 1% |
| Iterative Gradient Sign | 0.14 / 100% | 0.0078 / 100% | 0.19 / 100% | 0.014 / 100% | 0.26 / 100% | 0.023 / 100% |

TABLE IV: Comparison of the three variants of targeted attack to previous work for our MNIST and CIFAR models. Each cell gives mean distortion / success rate; when the success rate is not 100%, the mean is only over successes.

| | Untargeted | Average Case | Least Likely |
| --- | --- | --- | --- |
| Our L0 | 48 / 100% | 410 / 100% | 5200 / 100% |
| JSMA-Z | − / 0% | − / 0% | − / 0% |
| JSMA-F | − / 0% | − / 0% | − / 0% |
| Our L2 | 0.32 / 100% | 0.96 / 100% | 2.22 / 100% |
| Deepfool | 0.91 / 100% | − | − |
| Our L∞ | 0.004 / 100% | 0.006 / 100% | 0.01 / 100% |
| FGS | 0.004 / 100% | 0.064 / 2% | − / 0% |
| IGS | 0.004 / 100% | 0.01 / 99% | 0.03 / 98% |

TABLE V: Comparison of the three variants of targeted attack to previous work for the Inception v3 model on ImageNet. Each cell gives mean distortion / success rate; when the success rate is not 100%, the mean is only over successes.

We compare our targeted attacks to the best results previously reported in prior publications, for each of the three distance metrics. We re-implement Deepfool, fast gradient sign, and iterative gradient sign. For fast gradient sign, we search over ε to find the smallest distance that generates an adversarial example; we report failure if no ε produces the target class. Our iterative gradient sign method is similar: we search over ε (fixing α = 1/256) and return the smallest successful ε.

For JSMA we use the implementation in CleverHans [[35](#bib.bib35)] with only slight modification (we improve performance by 50× with no impact on accuracy). JSMA is unable to run on ImageNet due to an inherently significant computational cost: recall that JSMA performs a search over pairs of pixels p, q that can be changed together to make the target class more likely and other classes less likely. ImageNet represents images as 299×299×3 vectors, so searching over all pairs of pixels would require 2^36 work on each step of the calculation. If we remove the search over pairs of pixels, the success of JSMA falls off dramatically. We therefore report it as failing always on ImageNet.
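The ε line search used for the gradient-sign baselines is simple to state precisely; a sketch of the targeted fast-gradient-sign variant (again with an assumed `model` returning logits) follows:

```python
import torch
import torch.nn.functional as F

def fgs_smallest_eps(model, x, t, eps_grid):
    """Return the smallest eps for which one targeted FGS step reaches
    target class t, or None if no eps in the grid succeeds."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([t]))
    loss.backward()
    step = x.grad.sign()
    for eps in sorted(eps_grid):
        # Descend the cross-entropy toward the target class t
        x_adv = (x.detach() - eps * step).clamp(0, 1)
        if model(x_adv.unsqueeze(0)).argmax().item() == t:
            return eps, x_adv
    return None
```

The iterative variant repeats the signed step with a fixed α = 1/256 inside the same outer search over ε.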
We report success if the attack produced an adversarial example with the correct target label, no matter how much change was required. Failure indicates the case where the attack was entirely unable to succeed. We evaluate on the first 1,000 images in the test set for CIFAR and MNIST. On ImageNet, we report on 1,000 images that were initially classified correctly by Inception v3. (Otherwise, the best-case attack results would appear to succeed artificially often, an artifact of the relatively low top-1 accuracy.) On ImageNet we approximate the best-case and worst-case results by choosing 100 target classes (10%) at random.

The results are found in Table [IV](#S7.T4 "TABLE IV ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks") for MNIST and CIFAR, and in Table [V](#S7.T5 "TABLE V ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks") for ImageNet. (The complete code to reproduce these tables and figures is available online at <http://nicholas.carlini.com/code/nn_robust_attacks>.)

For each distance metric, across all three datasets, our attacks find closer adversarial examples than the previous state-of-the-art attacks, and our attacks never fail to find an adversarial example. Our L0 and L2 attacks find adversarial examples with 2× to 10× lower distortion than the best previously published attacks, and succeed with 100% probability. Our L∞ attacks are comparable in quality to prior work, but their success rate is higher. Our L∞ attacks on ImageNet are so successful that we can change the classification of an image to any desired label by only flipping the lowest bit of each pixel, a change that would be impossible to detect visually.

As the learning task becomes increasingly difficult, the previous attacks produce worse results, due to the complexity of the model. In contrast, our attacks perform even better as the task complexity increases. We found JSMA unable to find targeted L0 adversarial examples on ImageNet, whereas ours succeeds with 100% probability.

It is important to realize that the results between models are not directly comparable. For example, even though an L0 adversary must change 10 times as many pixels to switch an ImageNet classification as compared to a MNIST classification, ImageNet has 114× as many pixels, so the *fraction of pixels* that must change is significantly smaller.

![Targeted attacks for each of the 10 MNIST digits where the starting image is totally black for each of the three distance metrics.](https://media.arxiv-vanity.com/render-output/8045454/create0.png)

Fig. 6: Targeted attacks for each of the 10 MNIST digits where the starting image is totally black, for each of the three distance metrics (rows: L∞, L2, L0; columns: target classification 0-9).

![Targeted attacks for each of the 10 MNIST digits where the starting image is totally white for each of the three distance metrics.](https://media.arxiv-vanity.com/render-output/8045454/create1.png)

Fig. 7: Targeted attacks for each of the 10 MNIST digits where the starting image is totally white, for each of the three distance metrics (rows: L∞, L2, L0; columns: target classification 0-9).

*Generating synthetic digits.* With our targeted adversary, we can start from *any* image we want and find adversarial examples of each given target. Using this, in Figure [6](#S7.F6 "Fig. 6 ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks") we show the minimum perturbation to an entirely-black image required to make it classify as each digit, for each of the distance metrics. This experiment was performed previously for the L0 task [[38](#bib.bib38)]; however, when mounting their attack, "for classes 0, 2, 3 and 5 one can clearly recognize the target digit." With our more powerful attacks, none of the digits are recognizable. Figure [7](#S7.F7 "Fig. 7 ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks") performs the same analysis starting from an all-white image.

Notice that the all-black image requires no change to become a digit 1, because it is initially classified as a 1, and the all-white image requires no change to become an 8, because the initial image is already classified as an 8.

*Runtime Analysis.* We believe there are two reasons why one may consider the runtime performance of adversarial example generation algorithms important: first, to understand whether the performance would be prohibitive for an adversary to actually mount the attacks, and second, for use as an inner loop in adversarial re-training [[11](#bib.bib11)].

Comparing the exact runtime of attacks can be misleading. For example, we have parallelized the implementation of our L2 adversary, allowing it to run hundreds of attacks simultaneously on a GPU and increasing performance by 10× to 100×. However, we did not parallelize our L0 or L∞ attacks.
Similarly, our implementation of fast gradient sign is parallelized, but JSMA is not. We therefore refrain from giving exact performance numbers, because we believe an unfair comparison is worse than no comparison. All of our attacks, and all previous attacks, are efficient enough to be usable by an adversary: no attack takes longer than a few minutes to run on any given instance. For L0, our attack is 2× to 10× slower than our optimized JSMA algorithm (and significantly faster than the un-optimized version). For L2 and L∞, our attacks are typically 10× to 100× slower than previous attacks, with the exception of iterative gradient sign, relative to which we are 10× slower.

VIII Evaluating Defensive Distillation
--------------------------------------

*Distillation* was initially proposed as an approach to reduce a large model (the *teacher*) down to a smaller *distilled* model [[19](#bib.bib19)]. At a high level, distillation works by first training the teacher model on the training set in a standard manner. Then we use the teacher to label each instance in the training set with soft labels (the output vector from the teacher network). For example, while the hard label for an image of a hand-written digit 7 says only that it is classified as a seven, the soft labels might say it has an 80% chance of being a seven and a 20% chance of being a one. Then we train the distilled model on the soft labels from the teacher, rather than on the hard labels from the training set. Distillation can potentially increase accuracy on the test set, as well as the rate at which the smaller model learns to predict the hard labels [[19](#bib.bib19), [30](#bib.bib30)].

*Defensive distillation* uses distillation in order to increase the robustness of a neural network, but with two significant changes. First, the teacher model and the distilled model are identical in size: defensive distillation does not result in smaller models. Second, and more importantly, defensive distillation uses a large *distillation temperature* (described below) to force the distilled model to become more confident in its predictions.

Recall that the softmax function is the last layer of a neural network. Defensive distillation modifies the softmax function to also include a temperature constant T:

$$\mathrm{softmax}(x, T)_i = \frac{e^{x_i/T}}{\sum_j e^{x_j/T}}$$

It is easy to see that softmax(x, T) = softmax(x/T, 1). Intuitively, increasing the temperature causes a "softer" maximum, and decreasing it causes a "harder" maximum. In the limit as the temperature goes to 0, softmax approaches a hard max; in the limit as it goes to infinity, softmax(x) approaches a uniform distribution.

Defensive distillation proceeds in four steps:

1. Train a network, the teacher network, by setting the temperature of the softmax to T during the training phase.
2. Compute soft labels by applying the teacher network to each instance in the training set, again evaluating the softmax at temperature T.
3. Train the distilled network (a network with the same shape as the teacher network) on the soft labels, using softmax at temperature T.
4. Finally, when running the distilled network at test time (to classify new inputs), use temperature 1.
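The effect of the temperature, and the identity softmax(x, T) = softmax(x/T, 1), are easy to see numerically. A small illustrative script (not from the paper):

```python
import numpy as np

def softmax(x, T=1.0):
    """Temperature softmax: softmax(x, T)_i = exp(x_i/T) / sum_j exp(x_j/T)."""
    z = np.asarray(x, dtype=np.float64) / T
    z -= z.max()                    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits, T=100.0))     # near-uniform: a "softer" maximum
print(softmax(logits, T=0.01))      # near one-hot: a "harder" maximum
# The identity softmax(x, T) == softmax(x/T, 1):
assert np.allclose(softmax(logits, T=100.0), softmax(logits / 100.0, T=1.0))
```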
| | Best Case (MNIST) | Best Case (CIFAR) | Average Case (MNIST) | Average Case (CIFAR) | Worst Case (MNIST) | Worst Case (CIFAR) |
| --- | --- | --- | --- | --- | --- | --- |
| Our L0 | 10 / 100% | 7.4 / 100% | 19 / 100% | 15 / 100% | 36 / 100% | 29 / 100% |
| Our L2 | 1.7 / 100% | 0.36 / 100% | 2.2 / 100% | 0.60 / 100% | 2.9 / 100% | 0.92 / 100% |
| Our L∞ | 0.14 / 100% | 0.002 / 100% | 0.18 / 100% | 0.023 / 100% | 0.25 / 100% | 0.038 / 100% |

TABLE VI: Comparison of our attacks when applied to defensively distilled networks. Each cell gives mean distortion / success rate. Compare to Table [IV](#S7.T4 "TABLE IV ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks") for undistilled networks.

### VIII-A Fragility of existing attacks

We briefly investigate the reason that existing attacks fail on distilled networks, and find that existing attacks are very fragile and can easily fail to find adversarial examples even when those examples exist.

*L-BFGS and Deepfool* fail because the gradient of F(·) is almost always zero, which prohibits the use of the standard objective function. When we train a distilled network at temperature T and then test it at temperature 1, we effectively cause the inputs to the softmax to become larger by a factor of T. By minimizing the cross entropy during training, the output of the softmax is forced to be close to 1.0 for the correct class and 0.0 for all others. Since Z(·) is divided by T, the distilled network will learn to make the Z(·) values T times larger than they otherwise would be. (Positive values are forced to become about T times larger; negative values are multiplied by a factor of about T and thus become even more negative.) Experimentally, we verified this fact: the mean L1 norm of Z(·) (the logits) on the undistilled network is 5.8 with standard deviation 6.4; on the distilled network (with T = 100), the mean is 482 with standard deviation 457.

Because the values of Z(·) are 100 times larger, when we test at temperature 1 the output of F becomes ε in all components except the output class, which has confidence 1 − 9ε, for some very small ε (for tasks with 10 classes). In fact, in most cases ε is so small that the 32-bit floating-point value is rounded to 0. For similar reasons, the gradient is so small that it becomes 0 when expressed as a 32-bit floating-point value. This causes the L-BFGS minimization procedure to fail to make progress and terminate.

If instead we run L-BFGS with our stable objective function identified earlier, rather than the objective function loss_{F,l}(·) suggested by Szegedy *et al.* [[46](#bib.bib46)], L-BFGS does not fail. An alternate approach to fixing the attack would be to set

$$F'(x) = \mathrm{softmax}(Z(x)/T)$$

where T is the distillation temperature chosen. Then minimizing loss_{F′,l}(·) will not fail, as the gradients no longer vanish due to floating-point rounding. This clearly demonstrates the fragility of using the loss function as the objective to minimize.
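A toy numerical check makes this vanishing effect concrete (the logit values below are made up, but their scale matches the measurements just reported):

```python
import numpy as np

def softmax32(z):
    """Softmax evaluated in 32-bit floats, as at test time."""
    z = np.asarray(z, dtype=np.float32)
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([5.0, 2.0, -1.0])   # typical logit scale, undistilled
print(softmax32(z))              # every entry comfortably non-zero
print(softmax32(100 * z))        # distilled at T=100, tested at T=1: the
                                 # non-max entries underflow to exactly 0.0,
                                 # so the gradient of F is 0 in float32
print(softmax32(100 * z / 100))  # dividing the logits by T recovers F'
```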
*JSMA-F* (by which we mean the attack that uses the output of the final layer F(·)) fails for the same reason that L-BFGS fails: the output of the Z(·) layer is so large that the softmax becomes essentially a hard maximum. This is the version of the attack that Papernot *et al.* use to attack defensive distillation in their paper [[39](#bib.bib39)].

*JSMA-Z* (the attack that uses the logits) fails for a completely different reason. Recall that in the Z(·) version of the attack, we use the input to the softmax for computing the gradient instead of the final output of the network. This removes any potential issues with the gradient vanishing; however, it introduces new issues. This version of the attack was introduced by Papernot *et al.* [[38](#bib.bib38)], but it was not used to attack distillation; we provide here an analysis of why it fails.

Since this attack uses the Z values, it is important to realize the differences in relative impact. If the smallest input to the softmax layer is −100, then after the softmax layer the corresponding output becomes practically zero. If this input changes from −100 to −90, the output will still be practically zero. However, if the largest input to the softmax layer is 10 and it changes to 0, this will have a massive impact on the softmax output.

Relating this to the parameters used in their attack, α and β represent the size of the change at the input to the softmax layer. It is perhaps surprising that JSMA-Z works on un-distilled networks at all, as it treats all changes as being of equal importance, regardless of how much they change the softmax output. If changing a single pixel would increase the target class by 10 but also increase the least likely class by 15, the attack will not modify that pixel.

Recall that distillation at temperature T causes the value of the logits to be T times larger. In effect, this magnifies the sub-optimality noted above, as logits that are extremely unlikely but have slight variation can cause the attack to refuse to make any changes.

*Fast Gradient Sign* fails at first for the same reason L-BFGS fails: the gradients are almost always zero. However, something interesting happens if we attempt the same division trick and divide the logits by T before feeding them to the softmax function: distillation still remains effective [[36](#bib.bib36)]. We are unable to explain this phenomenon.

### VIII-B Applying Our Attacks

When we apply our attacks to defensively distilled networks, we find distillation provides only marginal value. We re-implement defensive distillation on MNIST and CIFAR-10 as described [[39](#bib.bib39)], using the same models we used for our evaluation above. We train our distilled models with temperature T = 100, the value found to be most effective [[39](#bib.bib39)].

Table [VI](#S8.T6 "TABLE VI ‣ VIII Evaluating Defensive Distillation ‣ Towards Evaluating the Robustness of Neural Networks") shows our attacks when applied to distillation. All of the previous attacks fail to find adversarial examples; in contrast, each of our three attacks succeeds with 100% success probability. When compared to Table [IV](#S7.T4 "TABLE IV ‣ VII Attack Evaluation ‣ Towards Evaluating the Robustness of Neural Networks"), distillation has added almost no value: our L0 and L2 attacks perform slightly worse, and our L∞ attack performs approximately equally.

### VIII-C Effect of Temperature

In the original work, increasing the temperature was found to consistently reduce attack success rate. On MNIST, the attack success rate goes from 91% at T = 1, to 24% at T = 5, and finally to 0.5% at T = 100.

We re-implement this experiment with our improved attacks to understand how the choice of temperature impacts robustness. We train models with the temperature varied from T = 1 to T = 100. When we re-run our implementation of JSMA, we observe the same effect: attack success rapidly decreases.
However, with our improved L2 attack, we see no effect of temperature on the mean distance to adversarial examples: the correlation coefficient is ρ = −0.05. This clearly demonstrates that increasing the distillation temperature does not increase the robustness of the neural network; it only causes existing attacks to fail more often.

![Mean distance to targeted (with random target) adversarial examples for different distillation temperatures on MNIST. Temperature is uncorrelated with mean adversarial example distance.](https://media.arxiv-vanity.com/render-output/8045454/x2.png)

Fig. 8: Mean distance to targeted (with random target) adversarial examples for different distillation temperatures on MNIST. Temperature is uncorrelated with mean adversarial example distance.

![Probability that adversarial examples transfer from one model to another, for both targeted (the adversarial class remains the same) and untargeted (the image is not the correct class).](https://media.arxiv-vanity.com/render-output/8045454/x3.png)

Fig. 9: Probability that adversarial examples transfer from one model to another, for both targeted (the adversarial class remains the same) and untargeted (the image is not the correct class).

![Probability that adversarial examples transfer from the baseline model to a model trained with defensive distillation at temperature 100.](https://media.arxiv-vanity.com/render-output/8045454/x4.png)

Fig. 10: Probability that adversarial examples transfer from the baseline model to a model trained with defensive distillation at temperature 100.

### VIII-D Transferability

Recent work has shown that an adversarial example for one model will often *transfer* to be an adversarial example on a different model, even if the two models are trained on different sets of training data [[46](#bib.bib46), [11](#bib.bib11)], and even if they use entirely different algorithms (i.e., adversarial examples on neural networks transfer to random forests [[37](#bib.bib37)]). Therefore, any defense that is able to provide robustness against adversarial examples *must* somehow break this transferability property; otherwise, we could run our attack algorithm on an easy-to-attack model and then transfer those adversarial examples to the hard-to-attack model.

Even though defensive distillation is not robust to our stronger attacks, we demonstrate a second break of distillation by transferring attacks from a standard model to a defensively distilled model. We accomplish this by finding *high-confidence adversarial examples*, which we define as adversarial examples that are strongly misclassified by the original model. Instead of looking for an adversarial example that just barely changes the classification from the source to the target, we want one where the target is much more likely than any other label.

Recall the loss function defined earlier for L2 attacks:

$$f(x') = \max\!\left( \max\{ Z(x')_i : i \neq t \} - Z(x')_t,\ -\kappa \right).$$

The purpose of the parameter κ is to control the strength of adversarial examples: the larger κ, the stronger the classification of the adversarial example. This allows us to generate high-confidence adversarial examples by increasing κ.

We first investigate whether our hypothesis holds that the stronger the classification on the first model, the more likely the example is to transfer. We do this by varying κ from 0 to 40.
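The shape of this experiment is simple; a sketch (with `attack` standing in for the hypothetical L2 routine above, which accepts κ, and `model_a`, `model_b` two independently trained models returning logits):

```python
def transfer_rate(model_a, model_b, attack, examples, kappa):
    """Fraction of adversarial examples crafted on model_a at confidence
    kappa that the independent model_b also classifies as the target.

    `examples` is a list of (image tensor, target class) pairs; `attack`
    is the assumed routine attack(model, x, t, kappa=...) -> x_adv.
    """
    hits = 0
    for x, t in examples:
        x_adv = attack(model_a, x, t, kappa=kappa)
        if model_b(x_adv.unsqueeze(0)).argmax().item() == t:
            hits += 1   # targeted transfer succeeded
    return hits / len(examples)

# Sweeping kappa over 0..40 traces out the transfer curves in Figs. 9-10.
```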
Our baseline experiment uses two models trained on MNIST as described in Section [IV](#S4 "IV Experimental Setup ‣ Towards Evaluating the Robustness of Neural Networks"), with each model trained on half of the training data. We find that the transferability success rate increases linearly from κ = 0 to κ = 20, and plateaus at near-100% success thereafter; clearly, increasing κ increases the probability of a successful transferable attack.

We then run this same experiment, except that we train the second model with defensive distillation, and find that adversarial examples *do* transfer. This gives us another attack technique for finding adversarial examples on distilled networks. Interestingly, however, the transferability success rate between the unsecured model and the distilled model only reaches 100% at κ = 40, compared to the κ = 20 required in the previous setup.

We believe this approach can be used in general to evaluate the robustness of defenses, even when a defense completely blocks the flow of gradients and thereby prevents our gradient-descent based approaches from succeeding.

IX Conclusion
-------------

The existence of adversarial examples limits the areas in which deep learning can be applied. It is an open problem to construct defenses that are robust to adversarial examples. In an attempt to solve this problem, defensive distillation was proposed as a general-purpose procedure to increase the robustness of an arbitrary neural network. In this paper, we propose powerful attacks that defeat defensive distillation, demonstrating that our attacks can be used more generally to evaluate the efficacy of potential defenses. By systematically evaluating many possible attack approaches, we settle on one that can consistently find better adversarial examples than all existing approaches. We use this evaluation as the basis of our three L0, L2, and L∞ attacks.

We encourage those who create defenses to perform the two evaluation approaches we use in this paper:

* Use a powerful attack (such as the ones proposed in this paper) to evaluate the robustness of the secured model directly. Since a defense that prevents our L2 attack will also prevent our other attacks, defenders should make sure to establish robustness against the L2 distance metric.
* Demonstrate that transferability fails by constructing high-confidence adversarial examples on an unsecured model and showing that they fail to transfer to the secured model.

Acknowledgements
----------------

We would like to thank Nicolas Papernot for discussing our defensive distillation implementation, and the anonymous reviewers for their helpful feedback. This work was supported by Intel through the ISTC for Secure Computing, Qualcomm, Cisco, the AFOSR under MURI award FA9550-12-1-0040, and the Hewlett Foundation through the Center for Long-Term Cybersecurity.
AI labs' statements on governance This is a collection of statements on government policy, regulation, and standards from leading AI labs and their leadership. As of 7 *August 2023*, I believe this post has all of the relevant announcements/blogposts from the three labs it covers, but I expect it is missing a couple relevant speeches/interviews with lab leadership.[[1]](#fn7i8u3ix4nqf) Suggestions are welcome. My quotes tend to focus on AI safety rather than other governance goals. Within sections, sources are roughly sorted by priority. OpenAI ------ ### [Governance of superintelligence](https://openai.com/blog/governance-of-superintelligence) (May 2023) > First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year. > > And of course, individual companies should be held to an extremely high standard of acting responsibly. > > Second, we are likely to eventually need something like an [IAEA](https://www.iaea.org/) for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say. > > ### [Planning for AGI and beyond](https://openai.com/blog/planning-for-agi-and-beyond) (Feb 2023) > We think it's important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it's important that major world governments have insight about training runs above a certain scale. > > ### Altman Senate testimony (May 2023) [Written testimony](https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf) (before the hearing): > There are several areas I would like to flag where I believe that AI companies and governments can partner productively. 
> > First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements. > > Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The U.S. government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration. > > Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting. > > [Questions for the Record](https://www.judiciary.senate.gov/imo/media/doc/2023-05-16_-_qfr_responses_-_altman.pdf) (after the hearing): > *What are the most important factors for Congress to consider when crafting legislation to regulate artificial intelligence?* . . . *What specific guardrails and/or regulations do you support that would allow society to benefit from advances in artificial intelligence while minimizing potential risks?* [Altman gave identical answers to these two questions] > > Any new laws related to AI will become part of a complex legal and policy landscape. A wide range of existing laws already apply to AI, including to our products. And in sectors like medicine, education, and employment, policy stakeholders have already begun to adapt existing laws to take account of the ways that AI impacts those fields. We look forward to contributing to the development of a balanced approach that addresses the risks from AI while also enabling Americans and people around the world to benefit from this technology. > > We strongly support efforts to harmonize the emergent accountability expectations for AI, including the efforts of the NIST AI Risk Management Framework, the U.S.-E.U. Trade and Technology Council, and a range of other global initiatives. While these efforts continue to progress, and even before new laws are fully implemented, we see a role for ourselves and other companies to make voluntary commitments on issues such as pre-deployment testing, content provenance, and trust and safety. > > We are already doing significant work on responsible and safe approaches to developing and deploying our models, including through red-teaming and quantitative evaluation of potentially dangerous model capabilities and risks. We report on these efforts primarily through a published document that we currently call a System Card. We are refining these approaches in tandem with the broader public policy discussion. 
> > For future generations of the most highly capable foundation models, which are likely to prove more capable than models that have been previously shown to be safe, we support the development of registration, disclosure, and licensing requirements. Such disclosure could help provide policymakers with the necessary visibility to design effective regulatory solutions, and get ahead of trends at the frontier of AI progress. To be beneficial and not create new risks, it is crucial that any such regimes prioritize the security of the information disclosed. Licensure is common in safety-critical and other high-risk contexts, such as air travel, power generation, drug manufacturing, and banking. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards. > > . . . > > *During the hearing, you testified that "a new framework" is necessary for imposing liability for harms caused by artificial intelligence—separate from Section 230 of the Communications Decency Act—and offered to "work together" to develop this framework. What features do you consider most important for a liability framework for artificial Intelligence?* > > Any new framework should apportion responsibility in such a way that AI services, companies who build on AI services, and users themselves appropriately share responsibility for the choices that they each control and can make, and have appropriate incentives to take steps to avoid harm. > > OpenAI disallows the use of our models and tools for certain activities and content, as outlined in our [usage policies](https://platform.openai.com/docs/usage-policies/use-case-policy%205%20https://openai.com/policies/terms-of-use). These policies are designed to prohibit the use of our models and tools in ways that may cause individual or societal harm. We update these policies in response to new risks and updated information about how our models are being used. Access to and use of our models are also subject to OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) which, among other things, prohibit the use of our services to harm people's rights, and prohibit presenting output from our services as being human-generated when it was not. > > One important consideration for any liability framework is the level of discretion that should be granted to companies like OpenAI, and people who develop services using these technologies, in determining the level of freedom granted to users. If liability frameworks are overly restrictive, the capabilities that are offered to users could in turn be heavily censored or restricted, leading to potentially stifling outcomes and negative implications for many of the beneficial capabilities of AI, including free speech and education. However, if liability frameworks are too lax, negative externalities may appear where a company benefits from lack of oversight and regulation at the expense of the overall good of society. One of the critical features of any liability framework is to attempt to find and continually refine this balance. > > Given these realities, it would be helpful for an assignment of rights and responsibilities related to harms to recognize that the results of AI systems are not solely determined by these systems, but instead respond to human-driven commands. 
For example, a framework should take into account the degree to which each actor in the chain of events that resulted in the harm took deliberate actions, such as whether a developer clearly stipulated allowed/disallowed usages or developed reasonable safeguards, and whether a user disregarded usage rules or acted to overcome such safeguards. > > AI services should also be encouraged to ensure a baseline of safety and risk disclosures for our products to minimize potential harm. This thinking underlies our approach of putting our systems through safety training and testing prior to release, frank disclosures of risk and mitigations, and enforcement against misuse. Care should be taken to ensure that liability frameworks do not inadvertently create unintended incentives for AI providers to reduce the scope or visibility of such disclosures. > > Furthermore, many of the highest-impact uses of new AI tools are likely to take place in specific sectors that are already covered by sector-specific laws and regulations, such as health, financial services and education. Any new liability regime should take into consideration the extent to which existing frameworks could be applied to AI technologies as an interpretive matter. To the extent new or additional rules are needed, they would need to be harmonized with these existing laws. > > [Hearing transcript](https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai/): > [Blumenthal asked Altman "the effect on jobs . . . is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern." His reply only mentioned jobs. Marcus noted that "Sam's worst fear I do not think is employment. And he never told us what his worst fear actually is. And I think it's germane to find out." Altman vaguely replied about "significant harm to the world."] > > . . . > > I think the US should lead here and do things first, but to be effective we do need something global. . . . There is precedent--I know it sounds naive to call for something like this, and it sounds really hard--there is precedent. We've done it before with the IAEA. We've talked about doing it for other technologies. Given what it takes to make these models--the chip supply chain, the limited number of competitive GPUs, the power the US has over these companies--I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world. > > . . . > > *Do you agree with me that the simplest way and the most effective way [to implement licensing of AI tools] is to have an agency that is more nimble and smarter than Congress . . . [overseeing] what you do?* > > We'd be enthusiastic about that. > > . . . > > *I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. . . . Please tell me in plain English, two or three reforms, regulations, if any, that you would, you would implement if you were queen or king for a day.* > > Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. 
Number two, I would create a set of safety standards . . . as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn't compliance with these stated safety thresholds and these percentages of performance on question X or Y. > > . . . > > I'm a believer in defense in depth. I think that there should be limits on what a deployed model is capable of, and then what it actually does too. > > . . . > > *Would you pause any further development for six months or longer?* > > So first of all, after we finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5. We don't have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is audits, red teaming, safety standards that a model needs to pass before training. If we pause for six months, then I'm not really sure what we do then-- do we pause for another six? Do we kind of come up with some rules then? The standards that we have developed and that we've used for GPT-4 deployment, we want want to build on those, but we think that's the right direction, not a calendar clock pause. There may be times--I expect there will be times--when we find something that we don't understand and we really do need to take a pause, but we don't see that yet. Nevermind all the benefits. > > *You don't see what yet? You're comfortable with all of the potential ramifications from the current existing technology?* > > I'm sorry. We don't see the reasons to not train a new one. For deploying, as I mentioned, I think there's all sorts of risky behavior and there's limits we put, we have to pull things back sometimes, add new ones. I meant we don't see something that would stop us from training the next model, where we'd be so worried that we'd create something dangerous even in that process, let alone the deployment that would happen. > > ### [NTIA comment](https://www.regulations.gov/document/NTIA-2023-0005-1245) (Jun 2023) > **OpenAI's Current Approaches** > > We are refining our practices in tandem with the evolving broader public conversation. Here we provide details on several aspects of our approach. > > *System Cards* > > Transparency is an important element of building accountable AI systems. A key part of our approach to accountability is publishing a document that we currently call a System Card, for new AI systems that we deploy. Our approach draws inspiration from previous research work on [model cards](https://arxiv.org/abs/1810.03993) and [system cards](https://montrealethics.ai/system-cards-for-ai-based-decision-making-for-public-policy/). To date, OpenAI has published two system cards: the [GPT-4 System Card](https://cdn.openai.com/papers/gpt-4-system-card.pdf) and [DALL-E 2 System Card](https://github.com/openai/dalle-2-preview/blob/main/system-card.md). 
> > We believe that in most cases, it is important for these documents to analyze and describe the impacts of a system – rather than focusing solely on the model itself – because a system's impacts depend in part on factors other than the model, including use case, context, and real world interactions. Likewise, an AI system's impacts depend on risk mitigations such as use policies, access controls, and monitoring for abuse. We believe it is reasonable for external stakeholders to expect information on these topics, and to have the opportunity to understand our approach. > > Our System Cards aim to inform readers about key factors impacting the system's behavior, especially in areas pertinent for responsible usage. We have found that the value of System Cards and similar documents stems not only from the overview of model performance issues they provide, but also from the illustrative examples they offer. Such examples can give users and developers a more grounded understanding of the described system's performance and risks, and of the steps we take to mitigate those risks. Preparation of these documents also helps shape our internal practices, and illustrates those practices for others seeking ways to operationalize responsible approaches to AI. > > *Qualitative Model Evaluations via Red Teaming* > > Red teaming is the process of qualitatively testing our models and systems in a variety of domains to create a more holistic view of the safety profile of our models. We conduct red-teaming internally with our own staff as part of model development, as well as with people who operate independently of the team that builds the system being tested. In addition to probing our organization's capabilities and resilience to attacks, red teams also use stress testing and boundary testing methods, which focus on surfacing edge cases and other potential failure modes with potential to cause harm. > > Red teaming is complementary to automated, quantitative evaluations of model capabilities and risks that we also conduct, which we describe in the next section. It can shed light on risks that are not yet quantifiable, or those for which more standardized evaluations have not yet been developed. Our prior work on red teaming is described in the DALL-E 2 System Card and the GPT-4 System Card. > > Our red teaming and testing is generally conducted during the development phase of a new model or system. Separately from our own internal testing, we recruit testers outside of OpenAI and provide them with early access to a system that is under development. Testers are selected by OpenAI based on prior work in the domains of interest (research or practical expertise), and have tended to be a combination of academic researchers and industry professionals (e.g, people with work experience in Trust & Safety settings). We evaluate and validate results of these tests, and take steps to make adjustments and deploy mitigations where appropriate. > > OpenAI continues to take steps to improve the quality, diversity, and experience of external testers for ongoing and future assessments. > > *Quantitative Model Evaluations* > > In addition to the qualitative red teaming described above, we create automated, quantitative evaluations for various capabilities and safety oriented risks, including risks that we find via methods like red teaming. 
These evaluations allow us to compare different versions of our models with each other, iterate on research methodologies that improve safety, and ultimately act as an input into decision-making about which model versions we choose to deploy. Existing evaluations span topics such as erotic content, hateful content, and content related to self-harm among others, and measure the propensity of the models to generate such content. > > *Usage Policies* > > OpenAI disallows the use of our models and tools for certain activities and content, as outlined in our [usage policies](https://platform.openai.com/docs/usage-policies/use-case-policy). These policies are designed to prohibit the use of our models and tools in ways that cause individual or societal harm. We update these policies in response to new risks and updated information about how our models are being used. Access to and use of our models are also subject to OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) which, among other things, prohibit the use of our services to harm people's rights, and prohibit presenting output from our services as being human-generated when it was not. > > We take steps to limit the use of our models for harmful activities by teaching models to refuse to respond to certain types of requests that may lead to potentially harmful responses. In addition, we use a mix of reviewers and automated systems to identify and take action against misuse of our models. Our automated systems include a suite of machine learning and rule-based classifier detections designed to identify content that might violate our policies. When a user repeatedly prompts our models with policy-violating content, we take actions such as issuing a warning, temporarily suspending the user, or in severe cases, banning the user. > > **Open Challenges in AI Accountability** > > As discussed in the RFC, there are many important questions related to AI Accountability that are not yet resolved. In the sections that follow, we provide additional perspective on several of these questions.  > > *Assessing Potentially Dangerous Capabilities* > > Highly capable foundation models have both beneficial capabilities, as well as the potential to cause harm. As the capabilities of these models get more advanced, so do the scale and severity of the risks they may pose, particularly if under direction from a malicious actor or if the model is not properly aligned with human values. > > Rigorously measuring advances in potentially dangerous capabilities is essential for effectively assessing and managing risk. We are addressing this by exploring and building evaluations for potentially dangerous capabilities that range from simple, scalable, and automated tools to bespoke, intensive evaluations performed by human experts. We are collaborating with academic and industry experts, and ultimately aim to contribute to the development of a diverse suite of evaluations that can contribute to the formation of best practices for assessing emerging risks in highly capable foundation models. We believe dangerous capability evaluations are an increasingly important building block for accountability and governance in frontier AI development. > > *Open Questions About Independent Assessments* > > Independent assessments of models and systems, including by third parties, may be increasingly valuable as model capabilities continue to increase. Such assessments can strengthen accountability and transparency about the behaviors and risks of AI systems. 
> > Some forms of assessment can occur within a single organization, such as when a team assesses its own work or when a team or part of the organization produces a model and another team or part, acting independently, tests that model. A different approach is to have an external third party conduct an assessment. As described above, we currently rely on a mixture of internal and external evaluations of our models. > > Third-party assessments may focus on specific deployments, a model or system at some moment in time, organizational governance and risk management practices, specific applications of a model or system, or some combination thereof. The thinking and potential frameworks to be used in such assessments continue to evolve rapidly, and we are monitoring and considering our own approach to assessments. > > For any third-party assessment, the process of selecting auditors/assessors with appropriate expertise and incentive structures would benefit from further clarity. In addition, selecting the appropriate expectations against which to assess organizations or models is an open area of exploration that will require inputs from different stakeholders. Finally, it will be important for assessments to consider how systems might evolve over time and build that into the process of an assessment / audit. > > *Registration and Licensing for Highly Capable Foundation Models* > > We support the development of registration and licensing requirements for future generations of the most highly capable foundation models. Such models may have sufficiently dangerous capabilities to pose significant risks to public safety; if they do, we believe they should be subject to commensurate accountability requirements. > > It could be appropriate to consider disclosure and registration expectations for training processes that are expected to produce highly capable foundation models. Such disclosure could help enable policymakers with the necessary visibility to design effective regulatory solutions, and get ahead of trends at the frontier of AI progress. It is crucial that any such regimes prioritize the security of the information disclosed. > > AI developers could be required to receive a license to create highly capable foundation models which are likely to prove more capable than models previously shown to be safe. Licensure is common in safety-critical and other high-risk contexts, such as air travel, power generation, drug manufacturing, and banking. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards; indeed, many of the accountability practices that the NTIA will be considering could be appropriate licensure requirements. Introducing licensure requirements at the computing provider level could also be a powerful complementary tool for enforcement. > > There remain many open questions in the design of registration and licensing mechanisms for achieving accountability at the frontier of AI development. We look forward to collaborating with policymakers in addressing these questions. > > ### [Altman interview](https://youtu.be/A5uMNMAWi3E?t=324) (Bloomberg, Jun 2023) > At this point, given how much people see the economic benefits and potential, no company could stop it. 
But global regulation-- which I only think should be on these powerful, existential-risk-level systems-- global regulation is hard, and you don't want to overdo it for sure, but I think global regulation can help make it safe, which is a better answer than stopping it, and I also don't think stopping it would work. . . . > > We for example don't think small startups and open-source models below a certain very high capability threshold should be subject to a lot of regulation. We've seen what happens to countries that try to overregulate tech; I don't think that's what we want here. But also we think it is super important that as we think about a system that could be at a [high risk level], that we have a global and as coordinated a response as possible. . . . > > [*What*](https://youtu.be/A5uMNMAWi3E?t=455) *do you think about the certification system of AI models that the Biden administration has proposed?* > > I think there's some version of that that's really good. I think that people training models that are way above– any model scale that we have today, but above some certain capability threshold– I think you should need to go through a certification process for that. I think there should be external audits and safety tests. > > ### [Frontier AI regulation](https://openai.com/research/frontier-ai-regulation) (Jul 2023) Note: some authors are affiliated with OpenAI, including Jade Leung and Miles Brundage, two governance leads. Some authors are affiliated with Google DeepMind. This paper is listed under OpenAI since OpenAI includes it on their Research page. It's not clear how much OpenAI endorses it. > Self-regulation is unlikely to provide sufficient protection against the risks from frontier AI models: government intervention will be needed. We explore options for such intervention. These include: > > * **Mechanisms to create and update safety standards** for responsible frontier AI development and deployment. These should be developed via multi-stakeholder processes, and could include standards relevant to foundation models overall, not exclusive to frontier AI. These processes should facilitate rapid iteration to keep pace with the technology. > * **Mechanisms to give regulators visibility** into frontier AI development, such as disclosure regimes, monitoring processes, and whistleblower protections. These equip regulators with the information needed to address the appropriate regulatory targets and design effective tools for governing frontier AI. The information provided would pertain to qualifying frontier AI development processes, models, and applications. > * **Mechanisms to ensure compliance with safety standards.** Self-regulatory efforts, such as voluntary certification, may go some way toward ensuring compliance with safety standards by frontier AI model developers. However, this seems likely to be insufficient without government intervention, for example by empowering a supervisory authority to identify and sanction non-compliance; or by licensing the deployment and potentially the development of frontier AI. Designing these regimes to be well-balanced is a difficult challenge; we should be sensitive to the risks of overregulation and stymieing innovation on the one hand, and moving too slowly relative to the pace of AI progress on the other. > > Next, we describe an initial set of safety standards that, if adopted, would provide some guardrails on the development and deployment of frontier AI models. 
Versions of these could also be adopted for current AI models to guard against a range of risks. We suggest that at minimum, safety standards for frontier AI development should include: > > * **Conducting thorough risk assessments informed by evaluations of dangerous capabilities and controllability.** This would reduce the risk that deployed models possess unknown dangerous capabilities, or behave unpredictably and unreliably. > * **Engaging external experts to apply independent scrutiny to models.** External scrutiny of the safety and risk profile of models would both improve assessment rigor and foster accountability to the public interest. > * **Following standardized protocols for how frontier AI models can be deployed based on their assessed risk.** The results from risk assessments should determine whether and how the model is deployed, and what safeguards are put in place. This could range from deploying the model without restriction to not deploying it at all. In many cases, an intermediate option—deployment with appropriate safeguards (e.g., more post-training that makes the model more likely to avoid risky instructions)—may be appropriate. > * **Monitoring and responding to new information on model capabilities.** The assessed risk of deployed frontier AI models may change over time due to new information, and new post-deployment enhancement techniques. If significant information on model capabilities is discovered post-deployment, risk assessments should be repeated, and deployment safeguards updated. > > Going forward, frontier AI models seem likely to warrant safety standards more stringent than those imposed on most other AI models, given the prospective risks they pose. Examples of such standards include: avoiding large jumps in capabilities between model generations; adopting state-of-the-art alignment techniques; and conducting pre-training risk assessments. Such practices are nascent today, and need further development. > > ### [Altman interview](https://nymag.com/intelligencer/2023/03/on-with-kara-swisher-sam-altman-on-the-ai-revolution.html) (NYmag, Mar 2023) > I think the thing that I would like to see happen immediately is just much more insight into what companies like ours are doing, companies that are training above a certain level of capability at a minimum. A thing that I think could happen now is the government should just have insight into the capabilities of our latest stuff, released or not, what our internal audit procedures and external audits we use look like, how we collect our data, how we're red-teaming these systems, what we expect to happen, which we may be totally wrong about. ["What I mean is government auditors sitting in our buildings."] We could hit a wall anytime, but our internal road-map documents, when we start a big training run, I think there could be government insight into that. And then if that can start now– I do think good regulation takes a long time to develop. It's a real process. They can figure out how they want to have oversight. . . . > > Those efforts probably do need a new regulatory effort, and I think it needs to be a global regulatory body. And then people who are using AI, like we talked about, as a medical adviser, I think the FDA can give probably very great medical regulation, but they'll have to update it for the inclusion of AI. 
But I would say creation of the systems and having something like an IAEA that regulates that is one thing, and then having existing industry regulators still do their regulation [Ed: he was cut off] . . . . > > *Section 230 doesn't seem to cover generative AI. Is that a problem?* > > I think we will need a new law for use of this stuff, and I think the liability will need to have a few different frameworks. If someone is tweaking the models themselves, I think it's going to have to be the last person who touches it has the liability, and that's — > > *But it's not full immunity that the platform's getting —* > > I don't think we should have full immunity. Now, that said, I understand why you want limits on it, why you do want companies to be able to experiment with this, you want users to be able to get the experience they want, but the idea of no one having any limits for generative AI, for AI in general, that feels super-wrong. > > ### [Brockman House testimony](https://www.govinfo.gov/content/pkg/CHRG-115hhrg30877/pdf/CHRG-115hhrg30877.pdf) (Jun 2018) [Written testimony](https://www.congress.gov/115/meeting/house/108474/witnesses/HHRG-115-SY15-Wstate-BrockmanG-20180626.pdf): > Policy recommendations > > 1. **Measurement**. Many other established voices in the field have tried to combat panic about AGI by instead saying it not something to worry about or is unfathomably far off. We recommend neither panic nor a lack of caution. Instead, we recommend investing more resources into understanding where the field is, how quickly progress is accelerating, and what roadblocks might lie ahead. We’re exploring this problem via our own research and support of initiatives like the AI Index. But there’s much work to be done, and we are available to work with governments around the world to support their own measurement and assessment initiatives — for instance, we participated in a GAO-led study on AI last year. > > 2. **Foundation for international coordination.** AGI’s impact, like that of the Internet before it, won’t track national boundaries. Successfully using AGI to make the world better for people, while simultaneously preventing rogue actors from abusing it, will require international coordination of some form. Policymakers today should invest in creating the foundations for successful international coordination in AI, and recognize that the more adversarial the climate in which AGI is created, the less likely we are to achieve a good outcome. We think the most practical place to start is actually with the measurement initiatives: each government working on measurement will create teams of people who have a strong motivation to talk to their international counterparts to harmonize measurement schemes and develop global standards. > > ### [Brockman Senate testimony](https://www.govinfo.gov/content/pkg/CHRG-114shrg24175/pdf/CHRG-114shrg24175.pdf) (Nov 2016) Anthropic --------- ### [Charting a Path to AI Accountability](https://www.anthropic.com/index/charting-a-path-to-ai-accountability) (Jun 2023) Anthropic's [NTIA comment](https://cdn2.assets-servd.host/anthropic-website/production/images/Anthropic-NTIA-Comment.pdf) is a longer version of this blogpost. > There is currently no robust and comprehensive process for evaluating today's advanced artificial intelligence (AI) systems, let alone the more capable systems of the future. Our submission presents our perspective on the processes and infrastructure needed to ensure AI accountability. 
Our recommendations consider the NTIA's potential role as a coordinating body that sets standards in collaboration with other government agencies like the [National Institute of Standards and Technology (NIST)](https://www.anthropic.com/index/an-ai-policy-tool-for-today-ambitiously-invest-in-nist). > > In our recommendations, we focus on accountability mechanisms suitable for highly capable and general-purpose AI models. Specifically, we recommend: > > * **Fund research to build better evaluations** > + Increase funding for AI model evaluation research. Developing rigorous, standardized evaluations is difficult and time-consuming work that requires significant resources. Increased funding, especially from government agencies, could help drive progress in this critical area. > + Require companies in the near-term to disclose evaluation methods and results. Companies deploying AI systems should be mandated to satisfy some disclosure requirements with regard to their evaluations, though these requirements need not be made public if doing so would compromise intellectual property (IP) or confidential information. This transparency could help researchers and policymakers better understand where existing evaluations may be lacking. > + Develop in the long term a set of industry evaluation standards and best practices. Government agencies like NIST could work to establish standards and benchmarks for evaluating AI models' capabilities, limitations, and risks that companies would comply with. > * **Create risk-responsive assessments based on model capabilities** > + Develop standard capabilities evaluations for AI systems. Governments should fund and participate in the development of rigorous capability and safety evaluations targeted at critical risks from advanced AI, such as deception and autonomy. These evaluations can provide an evidence-based foundation for proportionate, risk-responsive regulation. > + Develop a risk threshold through more research and funding into safety evaluations. Once a risk threshold has been established, we can mandate evaluations for all models against this threshold. > - If a model falls below this risk threshold, existing safety standards are likely sufficient. Verify compliance and deploy. > - If a model exceeds the risk threshold and safety assessments and mitigations are insufficient, halt deployment, significantly strengthen oversight, and notify regulators. Determine appropriate safeguards before allowing deployment. > * **Establish pre-registration for large AI training runs** > + Establish a process for AI developers to report large training runs ensuring that regulators are aware of potential risks. This involves determining the appropriate recipient, required information, and appropriate cybersecurity, confidentiality, IP, and privacy safeguards. > + Establish a confidential registry for AI developers conducting large training runs to pre-register model details with their home country's national government (e.g., model specifications, model type, compute infrastructure, intended training completion date, and safety plans) before training commences. Aggregated registry data should be protected to the highest available standards and specifications. 
> * **Empower third party auditors that are…** > + **Technically literate** – at least some auditors will need deep machine learning experience; > + **Security-conscious** – well-positioned to protect valuable IP, which could pose a national security threat if stolen; and > + **Flexible** – able to conduct robust but lightweight assessments that catch threats without undermining US competitiveness. > * **Mandate external red teaming before model release** > + Mandate external red teaming for AI systems, either through a centralized third party (e.g., NIST) or in a decentralized manner (e.g., via researcher API access) to standardize adversarial testing of AI systems. This should be a precondition for developers who are releasing advanced AI systems. > + Establish high-quality external red teaming options before they become a precondition for model release. This is critical as red teaming talent currently resides almost exclusively within private AI labs. > * **Advance interpretability research** > + Increase funding for interpretability research. Provide government grants and incentives for interpretability work at universities, nonprofits, and companies. This would allow meaningful work to be done on smaller models, enabling progress outside frontier labs. > + Recognize that regulations demanding interpretable models would currently be infeasible to meet, but may be possible in the future pending research advances. > * **Enable industry collaboration on AI safety via clarity around antitrust** > + Regulators should issue guidance on permissible AI industry safety coordination given current antitrust laws. Clarifying how private companies can work together in the public interest without violating antitrust laws would mitigate legal uncertainty and advance shared goals. > > We believe this set of recommendations will bring us meaningfully closer to establishing an effective framework for AI accountability. Doing so will require collaboration between researchers, AI labs, regulators, auditors, and other stakeholders. Anthropic is committed to supporting efforts to enable the safe development and deployment of AI systems. Evaluations, red teaming, standards, interpretability and other safety research, auditing, and strong cybersecurity practices are all promising avenues for mitigating the risks of AI while realizing its benefits. > > We believe that AI could have transformative effects in our lifetime and we want to ensure that these effects are positive. The creation of robust AI accountability and auditing mechanisms will be vital to realizing this goal. > > ### Dario Amodei Senate testimony (Jul 2023) [Written testimony](https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf) (before the hearing): > I will devote most of this prepared testimony to discussing the risks of AI, including what I believe to be extraordinarily grave threats to US national security over the next 2 to 3 years. . . . > > The *medium-term risks* are where I would most like to draw the subcommittee's attention. Simply put, a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. 
In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology. . . . > > **Policy Recommendations** > > In our view these concerns merit an urgent policy response. The ideal policy response would address not just the specific risks we've identified above, but would at the same time provide a framework for addressing as many other risks as possible – without, of course, hampering innovation more than is necessary. We recommend three broad classes of policies: > > * First, the U.S. must **secure the AI supply chain**, in order to maintain its lead while keeping these technologies out of the hands of bad actors. This supply chain runs all the way from semiconductor manufacturing equipment to AI models stored on the servers of companies like ours. A number of governments have taken steps in this regard. Specifically, the critical supply chain includes: > + Semiconductor manufacturing equipment, such as lithography machines. > + Chips used for training AI systems, such as GPUs. > + Trained AI systems, which are vulnerable to "export" through cybertheft or uncontrolled release. > - Companies such as Anthropic and others developing frontier AI systems should have to comply with stringent cybersecurity standards in how they store their AI systems. We have shared with the U.S. government and other labs our views of appropriate cybersecurity best practices, and are moving to implement these practices ourselves. > * Second, we recommend a **"testing and auditing regime" for new and more powerful models**. Similar to cars or airplanes, we should consider the AI models of the near future to be powerful machines which possess great utility, but that can be lethal if designed badly or misused. New AI models should have to pass a rigorous battery of safety tests both during development and before being released to the public or to customers. > + National security risks such as misuse of biology, cybersystems, or radiological materials should have top priority in testing due to the mix of imminence and severity of threat. > + However, the tests could also cover other concerns such as bias, potential to create misinformation, privacy, child safety, and respect for copyright. > + Similarly, the tests could measure the capacity for autonomous systems to escape control, beginning to get a handle on the risks of future systems. There are already nonprofit organizations, such as the Alignment Research Center, attempting to develop such tests. > + It is important that testing and auditing happen at regular checkpoints during the process of training powerful models to identify potentially dangerous capabilities or other risks so that they can be mitigated before training progresses too far. > + The recent voluntary commitments announced by the White House commit some companies (including Anthropic) to do this type of testing, but legislation could go further by mandating these tests for all models and requiring that they pass according to certain standards before deployment. > + It is worth stating clearly that given the current difficulty of controlling AI systems even where safety is prioritized, there is a real possibility that these rigorous standards would lead to a substantial slowdown in AI development, and that this may be a necessary outcome. 
Ideally, however, the standards would catalyze innovation in safety rather than slowing progress, as companies race to become the first company technologically capable of safely deploying tomorrow's AI systems. > * Third, we should recognize that the science of testing and auditing for AI systems is in its infancy, and much less developed than it is for airplanes and automobiles. In particular, it is not currently easy to entirely understand what bad behaviors an AI system is capable of, without broadly deploying it to users. Thus, it is important to **fund both measurement and research on measurement**, to ensure a testing and auditing regime is actually effective. > + Our suggestion for the agency to oversee this process is NIST, whose mandate focuses explicitly on measurement and evaluation. However, many other agencies could also contribute expertise and structure to this work. > + Anthropic has been a vocal supporter of the proposed National AI Research Resource (NAIRR). The NAIRR could, among other purposes, be used to fund research on measurement, evaluation, and testing, and could do so in the public interest rather than tied to a corporation. > > The three directions above are synergistic: responsible supply chain policies help give America enough breathing room to impose rigorous standards on our own companies, without ceding our national lead. Funding measurement in turn makes these rigorous standards meaningful. > > In conclusion, it is essential that we mitigate the grave national security risks presented by near-future AI systems, while also maintaining our lead in this critical technology and reaping the benefits of its advancement. > > [Hearing transcript](https://forum.effectivealtruism.org/posts/67zFQT4GeJdgvdFuk/partial-transcript-of-recent-senate-hearing-discussing-ai-x): [I haven't gone through this; see also [this](https://aiwatchtower.substack.com/p/humanity-is-back-in-the-office).] ### [[Expand NIST]](https://cdn2.assets-servd.host/anthropic-website/production/images/Anthropic_NIST_v3.pdf) (Apr 2023) This is a policy memo; there is also a corresponding [blogpost](https://www.anthropic.com/index/an-ai-policy-tool-for-today-ambitiously-invest-in-nist). It follows up on the following source. It also succeeds [Clark Senate testimony](https://www.commerce.senate.gov/services/files/F7BFA181-1B1B-4933-A815-70043413A7FF) (Sep 2022). > With this additional resourcing, NIST could continue and expand its work on AI assurance efforts like: > > * Cataloging existing AI evaluations and benchmarks used in industry and academia > * Investigating the scientific validity of existing evaluations (e.g., adherence to quality control practices, effects of technical implementation choices on evaluation results, etc.) > * Designing novel evaluations that address limitations of existing evaluations > * Developing technical standards for how to identify vulnerabilities in open-ended systems > * Developing disclosure standards to enhance transparency around complex AI systems > * Partnering with allies on international standards to promote multilateral interoperability > * Further developing and updating the AI Risk Management Framework > > More resourcing will allow NIST to build out much-needed testing environments for today's generative AI systems. > > ### [Frontier Model Security](https://www.anthropic.com/index/frontier-model-security) (Jul 2023) > Future advanced AI models have the potential to upend economic and national security affairs within and among nation-states. 
Given the strategic nature of this technology, frontier AI research and models must be secured to levels far exceeding standard practices for other commercial technologies in order to protect them from theft or misuse. > > In the near term, governments and frontier AI labs must be ready to protect advanced models and model weights, and the research that feeds into them. This should include measures such as the development of robust best practices widely diffused among industry, as well as treating the advanced AI sector as something akin to "critical infrastructure" in terms of the level of public-private partnership in securing these models and the companies developing them.  > > Many of these measures can begin as voluntary arrangements, but in time it may be appropriate to use government procurement or regulatory powers to mandate compliance. . . . > > We encourage extending [SSDF](https://csrc.nist.gov/Projects/ssdf) to encompass model development inside of NIST's standard-setting process. > > In the near term, these two best practices [viz. multi-party authorization and secure model development framework] could be established as procurement requirements applying to AI companies and cloud providers contracting with governments – alongside standard cybersecurity practices that also apply to these companies. As U.S. cloud providers provide the infrastructure that many current frontier model companies use, procurement requirements will have an effect similar to broad market regulation and can work in advance of regulatory requirements. > > ### [Comment on "Study To Advance a More Productive Tech Economy"](https://www.regulations.gov/comment/NIST-2021-0007-0055) (Feb 2022) Followed up on by the 'Expand NIST' sources. > The past decade of AI development charts a future course of increasingly large, high performing industry models that can be adapted for a wide variety of applications. Without intervention or investment however, we risk a future where AI development and oversight is controlled by a handful of actors, motivated primarily by commercial priorities. To ensure these systems drive a more productive and broadly beneficial economy, we must expand access and representation in their creation and evaluation. > > A robust assurance ecosystem would help increase public confidence in AI technology, enable a more competitive R&D environment, and foster a stronger U.S. economy. 
> > *The federal government can support this by:* > > * Increasing funding for academic researchers to access compute resources through efforts such as the National AI Research Resource (NAIRR) and the University Technology Center Program proposed in the United States Innovation and Competition Act (USICA) > * Providing financial grants to researchers, especially those currently underrepresented, who are developing assurance indicators in areas such as bias and fairness or novel forms of AI system oversight > * Prioritizing the development of AI testbeds, centralized datasets, and standardized testing protocols > * Identifying evaluations created by independent researchers and creating a catalog of validated tests > * Standardizing the essential components of self-designed evaluations and establishing norms for how evaluation results should be disclosed > Google DeepMind --------------- ### [NTIA comment](https://www.regulations.gov/document/NTIA-2023-0005-1308) (Google and Google DeepMind, Jun 2023) > While it is tempting to look for silver-bullet policy solutions, AI raises complex questions that require nuanced answers. It is a 21st century technology that requires a 21st century governance model. We need a multi-layered, multi-stakeholder approach to AI governance. This will include: > > * Industry, civil society, and academic experts developing and sharing best practices and technical standards for responsible AI, including around safety and misinformation issues; > * A hub-and-spoke model of national regulation; and > * International coordination among allies and partners, including around geopolitical security and competitiveness and alignment on regulatory approaches. > > At the national level, we support a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a "Department of AI." AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors—which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed. > > Maximizing the economic opportunity from AI will also require a joint effort across federal, state, and local governments, the private sector, and civil society to equip workers to harness AI-driven tools. AI is likely to generate significant economy-wide benefits. At the same time, to mitigate displacement risks, the private sector will need to develop proof-of-concept efforts on skilling, training, and continuing education, while the public sector can help validate and scale these efforts to ensure workers have wrap-around support. Smart deployment of AI coupled with thoughtful policy choices and an adaptive safety net can ensure that AI ultimately leads to higher wages and better living standards. > > With respect to U.S. regulation to promote accountability, we urge policymakers to: > > * **Promote enabling legislation for AI innovation leadership.** Federal policymakers can eliminate legal barriers to AI accountability efforts, including by establishing competition safe harbors for open public-private and cross-industry collaboration on AI safety research, and clarifying the liability for misuse and abuse of AI systems by different users (e.g., researchers, authors, creators of AI systems, implementers, and end users). 
Policymakers should also consider related legal frameworks that support innovation, such as adopting a uniform national privacy law that protects personal information and an AI model's incidental use of publicly available information. > * **Support proportionate, risk-based accountability measures.** Deployers of high-risk AI systems should provide documentation about their systems and undergo independent risk assessments focused on specific applications. > * **Regulate under a "hub-and-spoke" model rather than creating a new AI regulator.** Under this model, regulators across the government would engage a central, coordinating agency with AI expertise, such as NIST, with Office of Management and Budget (OMB) support, for technical guidance on best practices on AI accountability. > * **Use existing authorities to expedite governance and align AI and traditional rules.** Where appropriate, sectoral regulators would provide **updates clarifying** how existing authorities apply to the use of AI systems, as well as how organizations can demonstrate compliance of an AI system with these existing regulations. > * **Assign to AI deployers the responsibility of assessing the risk of their unique deployments**, auditing, and other accountability mechanisms as a result of their unparalleled awareness of their specific uses and related risks of the AI system. > * **Define appropriate accountability metrics and benchmarks**, as well as terms that may be ambiguous, to guide compliance. Recognize that many existing systems are imperfect and that even imperfect AI systems may, in some settings, be able to improve service levels, reduce costs, or increase affordability and availability. > * **Consider the tradeoffs between different policy objectives**, including efficiency and productivity enhancements, transparency, fairness, privacy, security, and resilience. > * **Design regulation to promote competitiveness, responsible innovation, and broad access to the economic benefits of AI.** > * **Require high standards of cybersecurity protections (including access controls) and develop targeted "next-generation" trade control policies.** > * **Avoid requiring disclosures that include trade secrets or confidential information (potentially advantaging adversaries)** or stymie this innovative sector as it continues to evolve. > * **Prepare the American workforce** for AI-driven job transitions and promote opportunities to broadly share AI's benefits. > Finally, NTIA asks how policymakers can otherwise advance AI accountability. The U.S. government should: > * **Continue building technical and human capacity into the ecosystem to enable effective risk management.** The government should deepen investment in fundamental responsible AI research (including bias and human-centered systems design) through federal agency initiatives, research centers, and foundations, as well as by creating and supporting public-private partnerships. > * **Drive international policy alignment, working with allies and partners to develop common approaches that reflect democratic values.** Policymakers can support common standards and frameworks that enable interoperability and harmonize global AI governance approaches. 
This can be done by: (1) enabling trusted data flows across national borders, (2) establishing multinational AI research resources, (3) encouraging the adoption of common approaches to AI regulation and governance and a common lexicon, based on the work of the Organisation for Economic Co-operation and Development (OECD), (4) working within standard-setting bodies such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) to establish rules, benchmarks, and governance mechanisms that can serve as a baseline for domestic regulatory approaches and deter regulatory fragmentation, (5) using trade and economic agreements to support the development of consistent and non-discriminatory AI regulations, (6) promoting copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models, while supporting workable opt-outs for websites, and (7) establishing more effective mechanisms for information and best-practice sharing among allies and between the private and the public sectors. > * **Explore updating procurement rules to incentivize AI accountability, and ensure OMB and the Federal Acquisition Regulatory Council are engaged in any such updates.** It will be critical for agencies who are further ahead in their development of AI procurement practices to remain coordinated and aligned upon a common baseline to effectively scale responsible governance (e.g., through the NIST AI Risk Management Framework (AI RMF)). > > The United States currently leads the world in AI development, and with the right policies that support both trustworthy AI and innovation, the United States can continue to lead and help allies enhance their own competitiveness while aligning around a positive and responsible vision for AI. Centering policies around economic opportunity, promoting responsibility and trust, and furthering our collective security will advance today's and tomorrow's AI innovation and unleash benefits across society. > > ### [Exploring institutions for global AI governance](https://www.deepmind.com/blog/exploring-institutions-for-global-ai-governance) (Jul 2023) Note: this is a Google DeepMind blogpost about the paper [International Institutions for Advanced AI](https://arxiv.org/abs/2307.04699). Some authors of the paper are affiliated with Google DeepMind. One author is affiliated with OpenAI. It's not clear how much Google DeepMind endorses it. > We explore four complementary institutional models to support global coordination and governance functions: > > * An intergovernmental **Commission on Frontier AI** could build international consensus on opportunities and risks from advanced AI and how they may be managed. This would increase public awareness and understanding of AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers. > * An intergovernmental or multi-stakeholder **Advanced AI Governance Organisation** could help internationalise and align efforts to address global risks from advanced AI systems by setting governance norms and standards and assisting in their implementation. It may also perform compliance monitoring functions for any international governance regime. > * A **Frontier AI Collaborative** could promote access to advanced AI as an international public-private partnership. 
In doing so, it would help underserved societies benefit from cutting-edge AI technology and promote international access to AI technology for safety and governance objectives. > * An **AI Safety Project** could bring together leading researchers and engineers, and provide them with access to computation resources and advanced AI models for research into technical mitigations of AI risks. This would promote AI safety research and development by increasing its scale, resourcing, and coordination. > ### [Hassabis interview](https://www.nytimes.com/2023/07/11/podcasts/transcript-ezra-klein-interviews-demis-hassabis.html) (Klein, Jul 2023) > *If we're getting to a point where somebody is getting near something like a general intelligence system, is that too powerful a technology to be in private hands? Should this be something that whichever corporate entity gets there first controls? Or do we need something else to govern it?* > > My personal view is that this is such a big thing in its fullness of time. I think it's bigger than any one corporation or even one nation. I think it needs international cooperation. I've often talked in the past about a CERN-like effort for A.G.I., and I quite like to see something like that as we get closer, maybe in many years from now, to an A.G.I. system, where really careful research is done on the safety side of things, understanding what these systems can do, and maybe testing them in controlled conditions, like simulations or games first, like sandboxes, very robust sandboxes with lots of cybersecurity protection around them. I think that would be a good way forward as we get closer towards human-level A.I. systems. > > Stuff besides statements ------------------------ Labs do some policy advocacy in private. I mostly don't know what their lobbying is like. It's probably important. Open letters related to governance: * [CAIS: Statement on AI Risk](https://www.safe.ai/statement-on-ai-risk) (May 2023) + "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." + The CEOs of OpenAI, Anthropic, and Google DeepMind signed. + 59 from Google DeepMind, 28 from OpenAI, and 15 from Anthropic signed. * [FLI: Pause Giant AI Experiments](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) (Mar 2023) + It was not signed by the leadership of OpenAI, Anthropic, or DeepMind. + It appears to have 8 signatories from DeepMind, 3 from OpenAI, and none from Anthropic. Not all signatures were authenticated. Labs sometimes do research relevant to governance, which matters directly and gives evidence about their attitudes: * DeepMind + [An early warning system for novel AI risks](https://www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks) (May 2023) - Coauthors include governance leadership of OpenAI and Anthropic - This helps support a *mandatory safety evals during large training runs* regime. Lab leadership sometimes tweets about their attitudes (very nonexhaustive): * Anthropic + Jack Clark - [Tweet](https://twitter.com/jackclarkSF/status/1673073311444791297) (Jun 2023) * "if best ideas for AI policy involve depriving people of the 'means of production' of AI (e.g H100s), then you don't have a hugely viable policy . . . . policy which looks like picking winners is basically bad policy, and compute controls (and related ideas like 'licensing') have this problem. 
[And a public option is supposed to help somehow.]" * This is right in part but seems largely confused/bad to me, and it's not clear how Clark proposes solving the *maybe it will be possible to train dangerous models with moderate amounts of hardware* problem. But I'm inclined to let him elaborate before passing judgment. - [Tweet](https://twitter.com/jackclarkSF/status/1673369486869811201) (Jun 2023) * "A world where we can push a button and stop larger compute things being built and all focus on safety for a while is good. I'm not sure also the compute control stuff gets you that and there are ways to game it, so need effort on other ideas also. . . . A total frontier ban is fine, it's just that where you and I probably have different worldviews is in how you make the ban work. If we could wave a wand and guarantee everyone worldwide stops doing stuff at the frontier for a while and redirects to safety, then that's good." - [Tweet](https://twitter.com/jackclarkSF/status/1676570216959205376) (Jul 2023) * "New essay: [What should the UK’s £100 million Foundation Model Taskforce do?](https://jack-clark.net/2023/07/05/what-should-the-uks-100-million-foundation-model-taskforce-do/) tl;dr: the UK has a unique opportunity to gain policy leverage and improve safety of AI landscape by having FM taskforce eval AI models for misuses and alignment risks. In this highly specific proposal I try to lay out exactly what the FM taskforce should do, list different projects and priorities, and sketch out staffing for such an initiative. My basic position is once you can evaluate AI systems you can gain leverage in policy. Most AI policy is confused or fuzzy because you aren't able to evaluate an AI system for various properties. This is also why the developers of AI go into all policy conversations with asymmetric information - they know how to eval their own systems for some stuff. If we want a better 'political economy of AI' it probably starts with reducing this information asymmetry by having govs and other third-parties develop ability to eval AI systems, ranging from proprietary models to open source ones." * OpenAI + Greg Brockman - [Tweet](https://twitter.com/gdb/status/1646183424024268800) (Apr 2023) * "We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments, be accompanied by increasingly-sophisticated predictions of their capability and impact, and require best practices such as dangerous capability testing. We think governance of large-scale compute usage, safety standards, and regulation of/lesson-sharing from deployment are good ideas, but the details really matter and should adapt over time as the technology evolves. It's also important to address the whole spectrum of risks from present-day issues (e.g. preventing misuse or self-harm, mitigating bias) to longer-term existential ones." Labs sometimes take actions relevant to governance (not exhaustive): * OpenAI and Anthropic work with ARC Evals to check their models for dangerous capabilities before deployment + [Update on ARC's recent eval efforts](https://evals.alignment.org/blog/2023-03-18-update-on-recent-evals/) (ARC Evals, Mar 2023) + This helps support a *mandatory safety evals during large training runs* regime. Other sources (using strikethrough to communicate that this is lower-priority than everything else in this post): * [~~Exclusive: OpenAI Lobbied E.U. 
to Water Down AI Regulation~~](https://time.com/6288245/openai-eu-lobbying-ai-act/) ~~(Time, Jun 2023)~~ + ~~This is not clearly bad but tentatively seems slightly bad. Slightly more so since it appears that they avoided talking about this publicly.~~ + ~~Maybe Google did something similar:~~[~~Big Tech Is Already Lobbying to Water Down Europe's AI Rules~~](https://time.com/6273694/ai-regulation-europe/) ~~(Time, Apr 2023).~~ + ~~An interviewer said "the EU is considering labeling ChatGPT high-risk" and Altman~~[~~replied~~](https://nymag.com/intelligencer/2023/03/on-with-kara-swisher-sam-altman-on-the-ai-revolution.html) ~~'I have followed the development of the EU's AI Act, but it changed. It's obviously still in development. I don't know enough about the current version of it to say this definition of what high-risk is and this way of classifying it, this is what you have to do. I don't know if I would say that's good or bad. I think totally banning this stuff is not the right answer, and I think that not regulating this stuff at all is not the right answer either. And so the question is, is that going to end in the right balance? I think if the EU is saying, "No one in Europe gets to use ChatGPT." Probably not what I would do, but if the EU is saying, "Here's the restrictions on ChatGPT and any service like it." There's plenty of versions of that I could imagine that are super-sensible.'~~ * [~~Google challenges OpenAI's calls for government AI czar~~](https://www.cnbc.com/2023/06/13/google-challenges-openais-calls-for-government-ai-czar.html) ~~(CNBC, Jun 2023)~~ + ~~'While OpenAI CEO Sam Altman touted the idea of a new government agency focused on AI to deal with its complexities and license the technology, Google said it preferred a "multi-layered, multi-stakeholder approach to AI governance."'~~ + ~~The best approach is not clear to me.~~ Other collections & analysis ---------------------------- * [Lab Statements on AI Governance](https://docs.google.com/document/d/1KknXf11a-DQuxvcephn6tJ_Bh07JPGZ__m6ss3HAuHg/edit?usp=sharing) (GovAI, Jul 2023) 1. **[^](#fnref7i8u3ix4nqf)**Potential sources not [yet added / worth adding]: Lots of governance papers by governance people at labs. Including some listed on labs' research pages and probably some with corresponding blogposts: - <https://openai.com/research/improving-verifiability> - <https://openai.com/research/preparing-for-malicious-uses-of-ai> --- Adjacent to *statements on governance* is *statements on AI to policy people*. E.g. <https://jack-clark.net/2023/07/18/ai-safety-and-corporate-power-remarks-given-at-the-united-states-security-council/>.  
--- [Import AI](https://importai.substack.com/) (and the rest of <https://jack-clark.net> (in particular <https://importai.substack.com/p/import-ai-337-why-i-am-confused-about> and <https://jack-clark.net/2023/07/05/what-should-the-uks-100-million-foundation-model-taskforce-do/>)) (Clark's personal capacity, but that's OK) --- Anthropic [tweet](https://twitter.com/AnthropicAI/status/1684972354592546816) --- Stuff from Altman's world tour in May–Jun 2023 --- DeepMind: <https://www.theguardian.com/commentisfree/2023/aug/04/ai-companies-regulation-international-inclusive> --- [Moore's Law for Everything](https://moores.samaltman.com) (Altman 2021) and [Sam Altman and Bill Gale on Taxation Solutions for Advanced AI](https://www.governance.ai/post/sam-altman-william-g-gale) (GovAI 2022) --- OpenAI: [Confidence-Building Measures for Artificial Intelligence](https://openai.com/research/confidence-building-measures-for-artificial-intelligence) --- In other labs: Inflection AI: Suleyman: [Tweet](https://twitter.com/mustafasuleymn/status/1677255322539261952) (Jul 2023): 'It's time for meaningful outside scrutiny of the largest AI training runs. The obvious place to start is "Scale & Capabilities Audits" 1./ There are two ways I see this working. Firstly an industry funded consortium that everyone voluntarily signs up to. In some ways this might be quicker and easier route, but the flaws are also obvious. 2./ It would almost immediately be accused of capture, and might be tempted to softball the audit process. More robust would be a new government agency of some kind, with a clear mandate to audit every model above certain scale and capability thresholds. 3./ This would be a big step change, fundamentally at odds with the old skool culture of the tech industry. But it's the right thing to do and [it's] time for a culture shift. We in AI should welcome third party audits. 4./ The critical thing now is to design a sensible system, and agree the benchmarks that will actually offer real oversight, and ensure that oversight is tied to delivering AI that works in the interests of everyone. Let's get started right away.' But note his AI-catastrophe-skepticism elsewhere (citation needed). Also [Tweet](https://twitter.com/mustafasuleymn/status/1677636986423713792) or what it links to. --- In other labs: Microsoft: [How do we best govern AI?](https://blogs.microsoft.com/on-the-issues/2023/05/25/how-do-we-best-govern-ai/) and [Governing AI: A Blueprint for the Future](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW14Gtw) (May 2023) (see especially "licensing regime"). --- People talk to governments privately-- e.g. I should ask Jack Clark if he's willing to share some of what he says privately?
What would a compute monitoring plan look like? [Linkpost]

Yonadav Shavit (CS PhD student at Harvard) recently released a paper titled [What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring](https://arxiv.org/abs/2303.11341). The paper describes a compute monitoring regime that could allow governments to monitor training runs and detect deviations from training run regulations. I think it's one of the most detailed public write-ups about compute governance, and I recommend AI governance folks read (or skim) it. A few highlights below (bolding mine).

Abstract:
=========

As advanced machine learning systems' capabilities begin to play a significant role in geopolitics and societal order, **it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other's compliance with potential future international agreements on advanced ML development.** This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework's primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners' models, data, and hyperparameters. **The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips.** The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [Jia et al. '21].

Solution overview:
==================

In this section, we outline a high-level technical plan, illustrated in Figure 1, for Verifiers to monitor Provers' ML chips for evidence that a large rule-violating training occurred. The framework revolves around chip inspections: the Verifier will inspect a sufficient random sample of the Prover's chips (Section 3.2), and confirm that none of these chips contributed to a rule-violating training run. For the Verifier to ascertain compliance from simply inspecting a chip, we will need interventions at three stages: on the chip, at the Prover's data-center, and in the supply chain.

* *On the chip (Section 4)*: When the Verifier gets access to a Prover's chip, they need to be able to confirm whether or not that chip was involved in a rule-violating training run. Given that rule violation depends only on the code that was run, our solution will necessitate that ML chips log infrequent traces of their activity, with logging done via hardware-backed firmware. **We suggest that ML chips' firmware occasionally log a copy of the current state of the chip's high-bandwidth memory to long-term storage, and in particular, that it logs the shard of the NN's weights stored in memory. These weight-snapshots can serve as a fingerprint of the NN training that took place on each chip.**
* *At the data-center (Section 5):* The Verifier needs a way to interpret the chips' logs, and determine whether or not they are evidence for a rule-violating training run. **To that end, the Prover, who is training the model, will be required to store a transcript of the training process — including training data, hyperparameters, and intermediate weight checkpoints — for each model they train.** Using protocols similar to "Proof-of-Learning" [23], these training transcripts may serve as provenance for the logged weight-snapshots, which are themselves the result of the same training process. In practice, for each (hash of a) weight-snapshot logged by a chip, the Prover provides the Verifier (the hashed version of) the matching training transcript. **Then the Prover and Verifier jointly and securely verify that, with high probability, retraining using the training transcript would have indeed resulted in the logged weight-snapshot (and that no other valid training transcript could have resulted in that snapshot).** Finally, now that the Verifier knows an approximate training transcript of the training run that had been executed on that chip at that time, they can examine properties of the training transcript to confirm that the Prover has complied with the agreed upon rules.
* *At the supply chain (Section 6):* The Verifier needs to know which ML chips the Prover owns, so that the Verifier can randomly inspect a representative sample of those chips, to confirm their ownership and that their logging mechanisms are reporting correctly. **Without this chip-ownership verification step, a Prover might covertly acquire a large quantity of chips and use them for training without ever notifying the Verifier.** Keeping track of chip-ownership is viable because the cutting-edge data-center chip supply chain is highly concentrated, meaning that chips originate from a few initial chokepoints and can be monitored thereafter.

These steps, put together, enable a chain of guarantees:

* When any organization wants to train a large rule-violating ML model, they must do so **using chips that the Verifier is aware of.**
* **These chips will occasionally log weight-snapshots**. Each time a chip logs a weight-snapshot, the Prover must report the log to the Verifier, along with (hashes of) training transcripts that establish the provenance of that weight-snapshot.
* If the Prover tries to hide the logs from every chip involved in the training run, the Verifier will eventually find out, because it will sample and physically inspect at least one of those chips with high probability.
* Also, the hashed training transcripts that the Prover provides along with the logged weight-snapshot need to be authentic, reflecting the training run that was run on the chip. If they are not authentic, they will be caught by the Verifier's transcript verification procedure.
* If the training transcript is authentic, and the run violated the rules, then the Verifier can tell, and will catch the Prover.

Thus, so long as the Prover complies with the Verifier's steps, the Verifier will detect the Prover's rule-violation with high probability. **Just as in financial audits, a Prover's refusal to comply with the verification steps would itself represent an indication of guilt.**
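Most of the statistical leverage in this scheme comes from the random-sampling step: a covert frontier-scale run must span many chips, and each one is a chance to get caught. Below is a minimal sketch (my own illustration, not from the paper) of that detection probability under uniform random sampling; the `detection_probability` helper and all chip counts are hypothetical numbers chosen for illustration.

```python
# Illustrative sketch, not from the paper: probability that a Verifier
# inspecting a uniform random sample of the Prover's tracked chips finds
# at least one chip that took part in a covert training run.
from math import comb

def detection_probability(total_chips: int, violating_chips: int,
                          sample_size: int) -> float:
    """P(sample contains >= 1 violating chip) under uniform sampling."""
    # P(miss) is the hypergeometric probability that every sampled chip
    # is non-violating; math.comb returns 0 when the sample exceeds the
    # non-violating pool, in which case detection is certain.
    miss = comb(total_chips - violating_chips, sample_size) / comb(total_chips, sample_size)
    return 1.0 - miss

# Hypothetical numbers: 100,000 tracked chips, a hidden run spanning
# 4,000 of them, and modest inspection budgets.
print(f"{detection_probability(100_000, 4_000, 100):.3f}")  # ~0.983
print(f"{detection_probability(100_000, 4_000, 10):.3f}")   # ~0.335
```

This is also why the scheme can tolerate infrequent snapshots and small inspection budgets: the larger the rule-violating run, the more chips it must touch, and the harder it becomes for all of them to evade a random audit.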
5652dac7-af2e-42d5-b2b7-ac1eec11623b
trentmkelly/LessWrong-43k
LessWrong
Meetup : Tel Aviv Meetup: Social & Board Games Discussion article for the meetup : Tel Aviv Meetup: Social & Board Games WHEN: 07 July 2015 07:00:00PM (+0300) WHERE: 98 Yigal Alon Street, Tel Aviv Next Tuesday at 19:00 we're going to have a social meetup! It's going to be a game night full of people talking about physics, friendly AI, and how to effectively save the world. Please bring any games you'd like to play. The Israeli LessWrong community meets every two weeks, alternating between lectures and social/gaming nights. Meet at Google, Electra Tower, 98 Yigal Alon Street, Tel Aviv: The 29th floor (not the Google Campus floor). We'll then move to a room. Contact: If you can't find us, call Anatoly, who is graciously hosting us, at 054-245-1060; or Joshua at 054-569-1165.
0cc579d5-101e-4e90-ba03-3646af9c23b0
trentmkelly/LessWrong-43k
LessWrong
Counting arguments provide no evidence for AI doom Crossposted from the AI Optimists blog. AI doom scenarios often suppose that future AIs will engage in scheming— planning to escape, gain power, and pursue ulterior motives, while deceiving us into thinking they are aligned with our interests. The worry is that if a schemer escapes, it may seek world domination to ensure humans do not interfere with its plans, whatever they may be. In this essay, we debunk the counting argument— a central reason to think AIs might become schemers, according to a recent report by AI safety researcher Joe Carlsmith.[1] It’s premised on the idea that schemers can have “a wide variety of goals,” while the motivations of a non-schemer must be benign by definition. Since there are “more” possible schemers than non-schemers, the argument goes, we should expect training to produce schemers most of the time. In Carlsmith’s words: > 1. The non-schemer model classes, here, require fairly specific goals in order to get high reward. > 2. By contrast, the schemer model class is compatible with a very wide range of (beyond episode) goals, while still getting high reward… > 3. In this sense, there are “more” schemers that get high reward than there are non-schemers that do so. > 4. So, other things equal, we should expect SGD to select a schemer. > > — Scheming AIs, page 17 We begin our critique by presenting a structurally identical counting argument for the obviously false conclusion that neural networks should always memorize their training data, while failing to generalize to unseen data. Since the premises of this parody argument are actually stronger than those of the original counting argument, this shows that counting arguments are generally unsound in this domain. We then diagnose the problem with both counting arguments: they rest on an incorrect application of the principle of indifference, which says that we should assign equal probability to each possible outcome of a random process. The indifference principle is controversia
a14766db-2a60-4cf2-a045-ba39ac89b08e
trentmkelly/LessWrong-43k
LessWrong
A Dialogue on Deceptive Alignment Risks I have long been puzzled by the wide differences in the amount of concern established alignment researchers show for deceptive alignment risks. I haven't seen a post that clearly outlines the cruxes: much of the relevant conversation is buried deep within LessWrong comment sections and shortform threads or isn't on the forum at all. Joe Carlsmith's report on scheming gives an excellent overview of the arguments but is 127 pages long. This post is my attempt to give a relatively short overview of the most important arguments on deceptive alignment through a dialogue between Skeptic and Advocate, fictional characters on the two sides of the debate.[1] Definitions and scope Skeptic: "First, let's settle on a shared definition. When I speak of deceptive alignment, I mean the specific failure mode described in Risks from Learned Optimization: an ML model is within a training process and reasons that it should emit particular outputs to avoid the training process modifying it in an undesired way. I do NOT talk about deception in general, where an AI is interacting with humans for any reason and knowingly provides the human with false or misleading information. In other words, Deceptive AI ≠ Deceptively-aligned AI. Second, let's define the bounds of the argument. When I say I assign a probability of <1% to deceptive alignment occurring, I mean a probability of <1% before human obsolescence. I'm not claiming that our future AI successors won't be worrying about deception when they're building literal galaxy brains that are nothing like today's LLMs. Do we agree on both of these points?" Advocate: "Yep, sounds good." Skeptic: "Great. Given this definition and scope, I claim that the likelihood of deceptive alignment occurring is somewhere between 0.1% and 1%. What's your estimate?" Advocate: "I would put it at 40%." How might deceptive alignment arise? Skeptic: "This is a bold claim, given the lack of empirical evidence we have seen for deceptive alignment. Where doe
ab2866fc-0403-4400-b159-0453b00123e9
trentmkelly/LessWrong-43k
LessWrong
Avoid the abbreviation "FLOPs" – use "FLOP" or "FLOP/s" instead Especially in discussions about AI, the abbreviation "FLOPs" is being used for both "floating point operations per second" (a measure of computational power) and "floating point operations" (a measure of total computations, and equivalent to the previous term times seconds). This is ambiguous and confusing. For clarity, I propose people avoid this specific abbreviation and instead use the alternatives of "FLOP" (for floating point operations) and "FLOP/s" (for floating point operations per second).
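A quick worked example of why the distinction matters: a cluster sustaining 10^15 FLOP/s for 10^6 seconds performs 10^15 × 10^6 = 10^21 FLOP in total. If either quantity is written as "FLOPs", the reader has to guess whether a rate or a total is meant.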
6f356a10-f889-4001-8cce-a3c3edc69ca0
trentmkelly/LessWrong-43k
LessWrong
Sources of intuitions and data on AGI Much of the difficulty in making progress on AI safety comes from the lack of useful feedback loops. We do not have a superintelligent AI to run tests on, and by the time we do, it will probably be too late. This means we have to resort to using proxies. In this post, I will highlight what I think are the four most useful proxies we have access to today: modern machine learning systems, humans, groups, and abstract models. These proxies are not opposed to each other. We need all the data we can get, and the right strategy is to learn from all of them. However, each proxy also has its flaws, and will break if used in the wrong way. It is important to pay attention to where each proxy works and where it fails. Modern Machine Learning Systems Pros: With ML systems, one can do experiments quickly to get fast feedback loops with lots of quantitative data. Also, AGI will probably be made using something similar to modern machine learning systems, so insights gained by thinking about ML systems today may be particularly easy to port over. Cons: Modern ML systems are opaque, and it can be very difficult to understand how they are working. This may be a problem we need to overcome eventually anyway, but that doesn't make it any easier to get data from them today. Modern ML systems are also not very smart, and it is reasonable to expect a qualitative difference between their behavior and the behavior of superintelligent systems. This may lead to a false sense of security, especially in solutions that depend implicitly or explicitly on the AI lacking some particular capability, or using a particular strategy. Examples: Concrete Problems in AI Safety and Alignment for Advanced Machine Learning Systems both try to reason about AI Safety by using modern machine learning as their primary source of intuitions. The safety teams at OpenAI and DeepMind try to use this as a primary source of data. (Although it would be very uncharitable to claim that they do not also draw from th
c5a03f3c-fdc5-4097-9bd4-c320bad6af86
trentmkelly/LessWrong-43k
LessWrong
Evaluating GiveWell as a startup idea based on Paul Graham's philosophy Effective altruism is a growing movement, and a number of organizations (mostly foundations and nonprofits) have been started in the domain. One of the very first of these organizations, and arguably the most successful and influential, has been charity evaluator GiveWell. In this blog post, I examine the early history of GiveWell and see what factors in this early history helped foster its success. My main information source is GiveWell's original business plan (PDF, 86 pages). I'll simply refer to this as the "GiveWell business plan" later in the post and will not link to the source each time. If you're interested in what the GiveWell website looked like at the time, you can browse the website as of early May 2007 here. To provide more context to GiveWell's business plan, I will look at it in light of Paul Graham's pathbreaking article How to Get Startup Ideas. The advice here is targeted at early stage startups. GiveWell doesn't quite fit the "for-profit startup" mold, but GiveWell in its early stages was a nonprofit startup of sorts. Thus, it would be illustrative to see just how closely GiveWell's choices were in line with Paul Graham's advice. There's one obvious way that this analysis is flawed and inconclusive: I do not systematically compare GiveWell with other organizations. There is no "control group" and no possibility of isolating individual aspects that predicted success. I intend to write additional posts later on the origins of other effective altruist organizations, after which a more fruitful comparison can be attempted. I think it's still useful to start with one organization and understand it thoroughly. But keep this limitation in mind before drawing any firm conclusions, or believing that I have drawn firm conclusions. The idea: working on a real problem that one faces at a personal level, is acutely familiar with, is of deep interest to a (small) set of people right now, and could eventually be of interest to many people Graham writes (e
923941ea-49f7-482f-b851-6717452629d0
trentmkelly/LessWrong-43k
LessWrong
Conversational Signposts—How to stop having boring social interactions I went from being a bad conversationalist to a good one after learning about conversational signposts. Before defining the concept that led me to having more engaging and diverse social interactions, I’d like to first showcase an awkward chat I had recently. > [The topic of music came up] > > Her: “Do you play any instruments?” > Me: “Yeah, I’ve been playing piano for 20 years.” > Her: “Hmm, cool.” > > [Awkward silence] > > Me: “So, uhh, do you play any instruments? Or are there any instruments you wish you could play?” > Her: “Nah, not really.” > > [Awkward silence] It’s possible she wasn’t interested in talking to me. But it’s also possible that she didn’t know how to advance the conversation. She could have been aided by using…   Conversational Signposts: distinct bits of information that, when followed, lead to divergent paths in a conversation   From the statement “I’ve been playing piano for 20 years,” I can extract out two unique components: 1. playing piano, and 2. for 20 years These are conversational signposts that represent different directions she could have taken our chat. Her general interest in music could’ve helped guide her to choose which signpost to follow. And if a particular branch of conversation fizzled out, she could’ve always circled back to previous signposts mentioned in the conversation:   Putting my awkward conversation about instruments aside, I want to contrast it with… A successful example. While talking to somebody else last week, we seamlessly followed each other’s conversational signposts (which are highlighted in bold). > Me: “Did you get up to anything Friday night?” > > Him: “Yeah, I went line dancing at a place called Stony’s.” > > Me: [I don’t care about dancing so I opted for a joke.] “Oh cool, does that place double as a weed bar?” > > Him: “Huh?” > > Me: “Cuz, you know, Stony’s.” > > Him: [Groaning] > > Me: [Jokingly] “I’ll see myself out” [and fake walked away]. “No, but seriously, is the reason
96fa35ea-ecf9-4d3e-a5e3-ff661cadc00e
trentmkelly/LessWrong-43k
LessWrong
Six economics misconceptions of mine which I've resolved over the last few years Here are six cases where I was pretty confident in my understanding of the microeconomics of something, but then later found out I was missing an important consideration. Thanks to Richard Ngo and Tristan Hume for helpful comments. Here’s the list of mistakes: * I thought divesting from a company had no effect on the company. * I thought that the prices on a prediction market converged to the probabilities of the underlying event. * I thought that I shouldn’t expect to be able to make better investment decisions than buying index funds. * I had a bad understanding of externalities, which was improved by learning about Coase’s theorem. * I didn’t realize that regulations like minimum wages are analogous to taxes in that they disincentivize work. * I misunderstood the economics of price controls. In each, I’m not talking about empirical situations at all—I’m just saying that I had a theoretical analysis which I think turned out to be wrong. It’s possible that in many real situations, the additional considerations I’ve learned about don’t actually affect the outcome very much. But it was still an error to not know that those considerations were potentially relevant. 1. Divestment I used to believe that personally divesting in a company didn’t affect its share price, and therefore had no impact on the company. I guess my reasoning here was something like “If the share is worth $10 and you sell it, someone else will just buy it for $10, so the price won’t change”. I was treating shares as if they were worth some fixed amount of money. The simplest explanation for why you can’t just model shares as being worth fixed amounts of money is that people are risk averse, and so the tenth Google share you buy is worth less to you than the first; and so as the price decreases, it becomes more worthwhile to take a bigger risk on the company. As a result, divestment reduces the price of shares, in the same way that selling anything else reduces its price. In the speci
d9116950-0830-4ee9-b4fe-bb0d467a8f74
trentmkelly/LessWrong-43k
LessWrong
Cutting edge technology Original post: http://bearlamp.com.au/cutting-edge-technology/ ---------------------------------------- When the microscope was invented, in a very short period of time we discovered the cell and the concept of microbiology.  That one invention allowed us to open up entire fields of biology and medicine.  Suddenly we could see the microbes!  We could see the activity that had been going on under our noses for so long. When we started to improve our ability to refine pure materials, we could finally make furnace bricks with specific compositions.  Specific compositions could then be used to make bricks that were able to reach higher temperatures without breaking.  Higher temperatures meant better refining of materials.  Better refining meant higher quality bricks, and so on until we now have some very pure technological processes around making materials.  But it's something we didn't have before the prior technology on the skill tree.   Before we had refrigeration and food packaging, it was difficult to get fresh food to survive the trip to your home.  Now with production lines it's very simple.  For all his decadence Caesar probably would have had trouble ordering a cheeseburger for $2 and having it ready in under 5 minutes.  We've come a long way since Caesar.  We've built a lot of things that help us stand on the shoulders of those who came before us. ---------------------------------------- Technology enables further progress.  That seems obvious.  But did that seem obvious before looking down the microscope?  Could we have predicted what bricks we could have made with purely refined materials?  Could Caesar have envisioned every citizen in his kingdom watching TV for relatively little cost to those people?  It would have been hard to foresee these things back then. With the idea that technology enables future growth in mind, I ask: "What technology is currently under-utilised?"  Would you be able to spot it when it happens?  Touch screen rev
803b55eb-d2f1-4ec2-8691-616f59425750
trentmkelly/LessWrong-43k
LessWrong
Meetup : Vancouver Boredom vs Scope Insensitivity, and life-debugging Discussion article for the meetup : Vancouver Boredom vs Scope Insensitivity, and life-debugging WHEN: 16 February 2013 03:00:00PM (-0800) WHERE: 2505 W Broadway, Vancouver Meet at Benny's Bagels on West Broadway at 15:00 on Saturday. We are going to discuss Boredom vs Scope Insensitivity and related issues with utility curves. To complement such a theoretical topic, we may also discuss the rationality failures in our lives that we'd like to get better at. It should be thoroughly depressing and hopefully useful. As usual, see us on our mailing list.
7765104b-ddd1-4cf4-a0e5-21cd7eb0e0ca
trentmkelly/LessWrong-43k
LessWrong
Directed Babbling At a rationality workshop I ran an activity called “A Thing.” Not only because I didn’t know what to call it, but because I didn’t know what to expect. In retrospect, I've decided to christen it "Directed Babbling." It was borne out of a naive hope that if two individuals trusted each other enough, they’d be able to lower their social inhibitions enough to have a conversation with zero brain-to-mouth filter. I thought this would lead to great conversations, and perhaps act as a pseudo-therapeutic tool to resolve disputes, disagreements over emotionally charged topics, and the like. However, it turns out this isn’t necessarily the best use case for a conversation where you simply say the first thing that comes into your head. As with any writing trying to describe social dynamics, this may be somewhat inscrutable. However, I will try my best to explain exactly what I claim to be a useful conversational tool, for use-cases  from “solving hard technical problems with a partner”, to “diving off the insanity deep end”.  Background Alice and Bob are having a conversation. Alice says X, which Bob responds to with Y,  in the context of the conversation (the previous things that Alice and Bob have said to each other) and the context of the world (Bob’s priors). Typically, Bob’s System 1 formulates Y and Bob’s System 2 “edits” it (for lack of a better term) - in most cases, the final output has more to do with System 1 than System 2. However, most of the time in discussion is spent with these System 2 “add-ons” - formulating ideas into sentences, making sure that the vocabulary is appropriate for the conversation, etc.  Hypothesis: if you intentionally remove the System 2 filters from the conversation between Alice and Bob, then you get a rapid feedback loop where the System 1 responses are simultaneously much faster and shorter than the original, which lets the conversation have a much higher idea density.  Setup We paired participants and asked them to come up with a
c76362ca-ec7c-44f5-8251-b23536af5c98
trentmkelly/LessWrong-43k
LessWrong
Nick Land: Orthogonality Editor's note  Due to the interest aroused by @jessicata's posts on the topic, Book review: Xenosystems and The Obliqueness Thesis, I thought I'd share a compendium of relevant Xenosystem posts I have put together. If you, like me, have a vendetta against trees, a tastefully typeset LaTeχ version is available at this link. If your bloodlust extends even further, I strongly recommend the wonderfully edited and comprehensive collection recently published by Passage Press. I have tried to bridge the aesthetic divide between the Deleuze-tinged prose of vintage Land and the drier, more direct expositions popular around these parts by selecting and arranging pieces so that no references are needed but those any LW-rationalist is expected to have committed to memory by the time of their first Lighthaven cuddle puddle (Orthogonality, Three Oracle designs), and I've purged the texts of the more obscure 2016 NRx and /acc inside baseball; test readers confirmed that the primer stands on its own two feet. The first extract, Hell-Baked, is not strictly about orthogonality, but I have decided to include it as it presents a concise and straightforward introduction to the cosmic darwinism underpinning the main thesis. Xenosystems: Orthogonality IS Hell-Baked > Neoreaction, through strategic indifference, steps over modes of condemnation designed to block certain paths of thought. Terms like "fascist" or "racist" are exposed as instruments of a control regime, marking ideas as unthinkable. These words invoke the sacred in its prohibitive sense. Is the Dark Enlightenment actually fascist? Not at all. It's probably the least fascistic strain of political thought today, though this requires understanding what fascism really is, which the word itself now obscures. Is it racist? Perhaps. The term is so malleable that it's hard to say with clarity. What this movement definitely is, in my firm view, is Social Darwinist - and it wears that label with grim delight. If "Social Dar
6592e322-7825-4d4d-8fa7-298682646495
trentmkelly/LessWrong-43k
LessWrong
Open thread, 18-24 March 2014 If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
889f49a8-d968-40dd-84ec-6f780ebdca4d
trentmkelly/LessWrong-43k
LessWrong
I believe some AI doomers are overconfident Just up front: I have no qualifications on this so adjust accordingly.  I teach AP Calc/AP Stats if you want to know my deal.  Putting this down because half the time I'm reading a post I'm thinking to myself "I wish I knew what this person's deal is" :) People who believe there is a >50% possibility of doom in the next 50 years or so strike me as overconfident.  Just to say the general public is obviously way underestimating the risk (0% too low lol), but I believe many people on this site are overestimating the risk. The complexity of the system is just so high.  How can we predict how a superintelligence that hasn't even been created yet will behave?  I understand that it's reasonable to assume it will want to accumulate resources, eliminate threats (us lol) etc., but how can anyone be, for instance, 90%+ sure that it will end with us dead? There are so many links of reasoning that all have at least a small chance of going awry.  Just spit-balling off the top of my head--  I'm sure all of these can be refuted, but like are you 100% sure?  Just to say these specifics are not the point of my argument: I'm just arguing that there are a million things that could in theory go wrong, and even if each is unlikely, it's kind of a Swiss-cheese defense against doom:

1. Maybe when a certain level of intelligence is reached, consciousness comes online, and that affects the behavior of the system in an unpredictable way.
2. Maybe alignment works!
3. Maybe the system is unbelievably super-intelligent, but for some reason true social reasoning is basically impossible for LLMs and the like, and we need to go down a different, distant path before that becomes possible. We can still easily trick it!
4. Maybe superintelligence is subject to rot.  Many complex systems just kind of decay, and maybe for some reason this intelligence is unable to constantly update its code and maintain itself perfectly.
5. Maybe it's actually worse than it seems, but fledgling AIs go crazy in 2
a6104fd4-b38d-459a-bfb6-fe8ca89d6b80
trentmkelly/LessWrong-43k
LessWrong
No, really, can "dead" time be salvaged? About a month ago, /u/batislu on /r/SlateStarCodex posted the question "How do you spend your "dead" time productively?". I read this thread, and found myself relieved (because of the admonitions to chill out), but also frustrated (because of the lack of real answers to the question). With the urgency entailed by extinction risks etc., "just chilling" during dead time can (for many of us) feel undoable. Or, at least, undoable some of the time. Assume, for many of us, our day job / school does little to directly help, at the highest levels, with the kinds of important problems discussed here. (This is a good time to remind everyone that these opinions are both hypothetical, and solely my own (not my employer's).) Then the questions become: * What, if anything, can be done in the tired "between-time" after work? * Can it help with any of the following?: * Directly helping work on AI safety / global risks. * Upskilling quickly enough to contribute substantially to the previous thing. * Improving one's health/intelligence/financial independence enough to be in a better position (in the near term, like less than a year) to help with the first thing. Some answers of the format and specificity being looked for here: * "Join this org's Discord and critique their ideas, if you find argument/feedback a relaxing/low-stress activity." * "Do 1 small unit of this easily-spit-uppable low-chance-of-getting-stuck MOOC per day." * "Find a type of exercise, like X Y or Z, that you find fun, and do that once per day." * "Here's a list of activities many people I know find productive and relaxing, see if any apply to you: ..." Note that the goal is not to replace all of one's dead time with something productive (unless it's possible to do without crashing and burning lol). The goal is to keep moving forward at things that would realistically help solve important problems. (Then our guilt/anxiety will be assuaged enough to actually enjoy/recharge the rest of our de
5e26c16d-de13-4382-8ed7-95d08c083755
trentmkelly/LessWrong-43k
LessWrong
PSA for academics in Ukraine (or anywhere else) who want to come to the United Kingdom If you are an academic with a PhD, or someone who has achieved some level of recognition in the arts or digital technology, and you would like to come to the United Kingdom, the UK's "Global Talent" visa is worth taking a look at.  Although the website makes it sound as if it is intended for the Stephen Hawkings of the world, I know multiple academics who did not think of themselves as "international leaders in their field" but managed to qualify nevertheless. If you are an academic (assuming you do not already have an eligible UK job offer, individual fellowship, UKRI research grant, or eligible award), you will have to go through a peer review process, which I assume is the typical route by which academics get in. For the digital technologies route, this blog claims that the success rate is around 50% (though of course, that's out of the people who submit applications, and the criteria are stringent enough that there's a certain amount of self-selection going on).
e19296c9-ce36-4c0c-98b9-91bd64b4e5e0
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Cooperative Oracles: Introduction This is the first in a series of posts introducing a new tool called a Cooperative Oracle. All of these posts are joint work with Sam Eisenstat, Tsvi Benson-Tilsen, and Nisan Stiennon. Here is my plan for posts in this sequence. I will update this as I go.

1. [Introduction](https://agentfoundations.org/item?id=1468)
2. [Nonexploited Bargaining](https://agentfoundations.org/item?id=1469)
3. [Stratified Pareto Optima and Almost Stratified Pareto Optima](https://agentfoundations.org/item?id=1508)
4. Definition and Existence Proof
5. Alternate Notions of Dependency

---

In this post, I will give a sketchy advertisement of what is to come. Cooperative oracles are a refinement of reflective oracles. We consider the set of Turing machines with access to a reflective oracle, and we think of each Turing machine as having a utility function (perhaps written in a comment in the source code). This utility function is itself computable using the reflective oracle. Since the definition of a reflective oracle uses a fixed point procedure that may have multiple fixed points, there are actually many different reflective oracles. However, we will take something like a Pareto optimum in the class of reflective oracles. For example, if two players in a prisoner's dilemma use reflective oracles to cooperate with exactly the probability the other player cooperates, there is a continuum of fixed points in which the players cooperate with the same probability. We want to take the fixed point in which they both cooperate. Unfortunately, we do not want to just take a standard Pareto optimum, since we are working with the class of all Turing machines, and for every machine with a utility function, there is another with the negation of that utility function, so all points will be optima. For this, we will use a stratified version of Pareto optima. If a third party wants the two above players to defect against each other, we will still choose mutual cooperation. Since the players only reference each other, and not the third party, the third party is not considered in the notion of optimality for the output of the oracle on the prisoners' dilemma game. We will be able to build a cooperative oracle, which is a reflective oracle that (roughly) ensures that all of the outputs of the computations are Pareto optimal for all of the utility functions of the machines involved in that computation. This Pareto optimality is only used to choose between multiple fixed points. We will still be restricted to have the computation follow its source code, and have the reflective oracle give honest data about the output of the computations. Using cooperative oracles, we will be able to set up a system similar to modal combat, but with reflective oracles. We will also be able to study bargaining. We will formalize the phenomenon described in the Eliezer Yudkowsky post [Cooperating with agents with different ideas of fairness, while resisting exploitation](http://lesswrong.com/lw/inh/cooperating_with_agents_with_different_ideas_of/), and implement this style of bargaining with reflective oracles. This is interesting because it reduces the problem of finding a fair way to split gains from trade from a group endeavor (like in standard cooperative game theory) to an individual endeavor, where each player specifies what they consider to be a fair outcome for them, and the reflective oracle takes care of the rest. The problem of figuring out how to define what is fair to you is still wide open, however. 
The notion of cooperative oracles presented here will use a clunky syntactic notion of when one computation depends on another. If an algorithm calls the reflective oracle on the output of another, we say the first algorithm depends on the second, and we take the transitive closure of this property. There are many alternatives to this, some involving looking at the actual function the algorithms compute and checking how these functions depend on other values. Specifying better notions of computations depending on each other is a hard problem, and better solutions to this may yield better definitions of cooperative oracles.
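As a toy illustration of this syntactic notion (a sketch of mine, not code from these posts), one can record which machines each machine queries the oracle about, and close that relation transitively. Note how, as in the prisoner's dilemma example above, the third party depends on the two players while neither player depends on it:

```python
def transitive_closure(direct_deps: dict[str, set[str]]) -> dict[str, set[str]]:
    """direct_deps[m] = machines that m queries the reflective oracle about."""
    closure = {m: set(deps) for m, deps in direct_deps.items()}
    changed = True
    while changed:  # iterate until a fixed point is reached
        changed = False
        for deps in closure.values():
            new = set()
            for d in deps:
                new |= closure.get(d, set())
            if not new <= deps:
                deps |= new
                changed = True
    return closure

# Two prisoners query the oracle about each other; a third party queries both.
deps = {"player1": {"player2"},
        "player2": {"player1"},
        "third_party": {"player1", "player2"}}
closure = transitive_closure(deps)
print(closure["player1"])      # {'player1', 'player2'}: the players depend only on each other
print(closure["third_party"])  # {'player1', 'player2'}: neither player depends on the third party
```

This asymmetry is what allows a stratified notion of optimality to ignore the third party's preferences when fixing the output on the prisoners' dilemma game.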
526a3910-1edf-47aa-8de3-9498c504f7e6
trentmkelly/LessWrong-43k
LessWrong
Matrix and Inspirational religious fiction Preface This post is entirely intended to be inspirational and entertaining. That, though, is no excuse for the logical flaws in it. None of this is particularly plausible, nor am I trying to support religion in some weird way. Anyway, I hope no one finds this offensive, but I don't think this is the most sensitive website around religious subjects anyway, so.   Religion Sci-fi in the style of Dr. Emmett Brown Suppose that mankind successfully produces a Friendly AI in the future and also adopts some ethical doctrine that cares about people, goes on making technological advancements and prolonging the lives and subjective experiences of humans, or something else along those lines. Then also suppose that humans learn how to travel back in time. It would present an ethical dilemma of what to do about all those lost beings that died long ago. Considering there had been a finite number of such cases, it would be plausible to consume a finite amount of resources to bend the future of humanity backwards in time, or travel back in time, to fetch all those people just before they perish and allow them to join the eudaimonic future society of humans. Now consider the possibility that the vast majority of futures for mankind do not contain these outcomes where a Friendly AI is successfully established and people care about people, and instead have not successfully established any flexible, sophisticated or complex morality at all. And in those futures where humanity achieved all those things, it could be seen as a problem that some futures had consisted of such tragic waste. So we decide to alter the past futures of humanity, and travel backwards in time yet again, establish religion, appear on the mountain of Sinai, give out the Ten Commandments, and speak of a being superior to humans that takes care of everybody, which would be the Friendly AI, a Bostromian singleton, or similar. However, they would not do this just to steer the possible future outcomes of humanity away fr
692f603a-54a8-403c-989d-1ce22a64bc36
StampyAI/alignment-research-dataset/special_docs
Other
Is state-dependent valuation more adaptive than simpler rules?
==============================================================

Abstract
--------

[McNamara et al. (2012)](#bib0030) claim to provide an explanation of certain systematic deviations from rational behavior using a mechanism that could arise through natural selection. We provide an arguably much simpler mechanism in terms of computational limitations, that performs better in the environment described by [McNamara et al. (2012)](#bib0030). To argue convincingly that animals' use of state-dependent valuation is adaptive and is likely to be selected for by natural selection, one must argue that, in some sense, it is a better approach than the simple strategies that we propose.

Introduction
------------

Although much animal behavior can be understood as rational, in the sense of making a best response in all situations, some systematic deviations from rationality have been observed. For example, Marsh et al. (2004) presented starlings with two potential food sources, one of which had provided food during "tough times", when the birds had been kept at low weight, while the other had provided food during "good times", when the birds were well fed. They showed that the starlings preferred the food source that had fed them during the tough times, even when that source had a longer delay to food than the other source. Similar behavior was also observed in fish and desert locusts (Aw et al., 2009; Pompilio et al., 2006). McNamara et al. (2012) claim to provide an explanation of this behavior using a mechanism that could arise through natural selection. They provide an abstract model of the bird-feeding setting where a decision maker can choose either a "risky" action or a "safe" action. They also provide a mechanism that takes internal state into account and can lead to good results (where, in the example above, the internal state could include the fitness of each source). However, as we observe, for the particular parameters used in their model, there is a *much* better (in the sense of getting a higher survival probability) and much simpler approach than their mechanism that does not take the internal state into account: simply playing safe all the time. It is hard to see how the mechanism proposed by McNamara et al. could arise by natural selection in the model that they use; the simpler mechanism would almost surely arise instead. The fact that always playing safe does well depends on the particular parameter settings used by McNamara et al. Playing safe would not be a good idea for other parameter settings. However, we show that a simple 2-state automaton that more or less plays according to what it last got also does quite well. It does significantly better than the McNamara et al. mechanism, and does well in a wide variety of settings. Although our automaton also takes internal state into account (the internal state keeps track of the payoff at the last step), it does so in a minimal way, which does not suffice to explain the irrational behavior observed. It seems to us that to argue convincingly that the type of mechanism proposed by McNamara et al. is adaptive and is likely to be selected for by natural selection, and thus explains animals' use of state-dependent valuation, one must argue that, in some sense, it is a better approach than the simple strategies that we propose. 
Now it could be that the simple strategies we consider do not work so well in a somewhat more complicated setting, and in that setting, taking McNamara et al.'s approach does indeed do better. However, such a setting should be demonstrated; it does not seem easy to do so. In any case, at a minimum, these observations suggest that McNamara et al.'s explanation for the use of state-dependent strategies is incomplete. We should add that we are very sympathetic to the general approach taken by McNamara et al., although our motivation has come more from the work of Wilson (2015) and Halpern et al. (2012, 2014), which tries to explain seemingly irrational behavior, this time on the part of humans, in an appropriate model. That work assumes that people are resource-bounded, which is captured by modeling people as finite-state automata, and argues that an optimal (or close to optimal) finite-state automaton will exhibit some of the "irrational" behavior that we observe in people. (The 2-state automaton that we mentioned above is in fact a special case of a more general family of automata considered in Halpern et al. (2012); see Section 3.3.) We believe that taking computational limitations seriously might be a useful approach in understanding animal behavior, and may explain at least some apparently irrational behavior. The rest of this paper is organized as follows. In Section 2, we review the model used by McNamara et al. (2012) and compare it to that of Halpern et al. (2012). In Section 3, we describe four strategies that an agent can use in the McNamara et al. model, under the assumption that the agent knows which action is the risky action and which is the safe action. One is the strategy used by McNamara et al.; another is a simplification of the strategy that we considered in our work; the remaining two are baseline strategies. In Section 4, we evaluate the strategies under various settings of the model parameters. In Section 5, we consider what happens if the agent does not know which action is risky and which is safe and, more generally, the issue of learning. We conclude in Section 6.

Section snippets
----------------

The model
---------

McNamara et al. (2012) assume that agents live at most one year, and that each year is divided into two periods, winter and summer. Animals can starve to death during a winter if they do not find enough food. If an agent survives the winter, then it reproduces over the summer, and reproductive success is independent of the winter behavior. A "winter" is a series of *T* discrete time steps. At any given time, the environment is in one of two states: *G* (good) or *S* (sparse); the state of the

Four strategies
---------------

In this section, we describe four strategies that an agent can use in the McNamara et al. model. We will be interested in the probability that an agent survives a "winter" period using each of these strategies. Note that the higher this probability is, the greater the probability that this strategy will emerge as the dominant strategy in an evolutionary process.

Evaluating the strategies
-------------------------

In this section, we evaluate the four strategies discussed in the previous section under various settings of the model parameters. We calculate the survival probability of an agent using the strategy over a winter of length *T* = 500 steps by simulating 100,000 winters and looking at the ratio of runs in which the agent survived. 
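To make the evaluation procedure concrete, here is a minimal Monte Carlo sketch of this kind of simulation. It is an illustration only: all environment and payoff parameters below are invented (the paper's actual model differs), and the automaton is a loose stand-in for the 2-state automaton of Halpern et al. (2012).

```python
import random

# Invented parameters for illustration; the paper's actual values differ.
T = 500                          # length of a "winter" in time steps
MAX_RES = 10                     # maximum resource level; the agent dies at 0
P_SAFE = 0.5                     # chance the safe action yields food (assumed)
P_RISKY = {"G": 0.8, "S": 0.2}   # chance the risky action yields food, by state (assumed)
P_SWITCH = 0.01                  # chance the environment flips between G and S (assumed)

def run_winter(strategy):
    """Simulate one winter; return True if the agent survives all T steps."""
    env, res, last_fed = "S", MAX_RES, True  # start sparse, full resources
    for _ in range(T):
        action = strategy(last_fed)
        p = P_SAFE if action == "safe" else P_RISKY[env]
        last_fed = random.random() < p
        res = min(MAX_RES, res + 1) if last_fed else res - 1
        if res <= 0:
            return False
        if random.random() < P_SWITCH:
            env = "G" if env == "S" else "S"
    return True

def always_safe(last_fed):
    return "safe"

def automaton(last_fed):
    # Roughly "play according to what you last got": risky while being fed,
    # retreat to safe after a miss (an assumed simplification).
    return "risky" if last_fed else "safe"

for name, strategy in [("always safe", always_safe), ("2-state automaton", automaton)]:
    n = 10_000  # the paper simulates 100,000 winters
    survived = sum(run_winter(strategy) for _ in range(n))
    print(f"{name}: survival probability ~ {survived / n:.3f}")
```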
We initialize the environment to the sparse state and the resource level to the maximum of 10.

Discussion
----------

While we show that the automaton strategy is better than the value strategy in many scenarios, as we discussed before, the value strategy (or, more generally, state-dependent strategies) seems to be what animals actually use in the scenarios studied in previous papers. We now discuss some possible explanations for this.

Conclusion
----------

Our results show that some very simple strategies seem to consistently outperform the value strategy. This gap grows as the task of surviving becomes more challenging (either because "winter" is longer or the rewards are not as high). This at least suggests that the model considered by McNamara et al. (2012) is not sufficient to explain the evolution of the value strategy in animals. McNamara et al. claim that "[w]hen an animal lacks knowledge of the environment it faces, it may be adaptive for

Acknowledgements
----------------

This work was supported in part by NSF grants IIS-0911036 and CCF-1214844, by ARO grant W911NF-14-1-0017, by Simons Foundation grant #315783, and by the Multidisciplinary University Research Initiative (MURI) program administered by the AFOSR under grant FA9550-12-1-0040. Thanks to Arnon Lotem, Alex Kacelnik, and Pete Trimmer for their useful comments. Most of this work was carried out while the second author was at Harvard’s Center for Research on Computation and Society; Harvard's support is gratefully acknowledged.

References (8)
--------------

* Aw, J.M., et al. [State-dependent valuation learning in fish: banded tetras prefer stimuli associated with greater past deprivation](/science/article/pii/S0376635708002131). Behav. Process. (2009)
* Erev, I., et al. A choice prediction competition for market entry games: an introduction. Games (2010)
* Halpern, J.Y., et al. I’m doing as well as I can: modeling people as rational finite automata. Proc. Twenty-Sixth National Conference on Artificial Intelligence (AAAI’12) (2012)
* Halpern, J.Y., et al. Decision theory with resource-bounded agents. Top. Cogn. Sci. (2014)
43d6b07e-2fb2-4170-b5b8-d28b5b5d38bc
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Building a transformer from scratch - AI safety up-skilling challenge It is not always obvious whether your skills are sufficiently good to work for one of the various AI safety and alignment organizations. There are many options to calibrate and improve your skills including just applying to an org or talking with other people within the alignment community.  One additional option is to test your skills by working on projects that are closely related to or a building block of the work being done in alignment orgs. By now, there are multiple curricula out there, e.g. the one by [Jacob Hilton](https://github.com/jacobhilton/deep_learning_curriculum) or the one by [Gabriel Mukobi](https://forum.effectivealtruism.org/posts/S7dhJR5TDwPb5jypG/levelling-up-in-ai-safety-research-engineering).  One core building block of these curricula is to understand transformers in detail and a common recommendation is to check if you can build one from scratch. Thus, my girlfriend and I have recently set ourselves the challenge to build various transformers from scratch in PyTorch. We think this was a useful exercise and want to present the challenge in more detail and share some tips and tricks. You can find our code [here](https://github.com/mariushobbhahn/transformers_from_scratch). Building a transformer from scratch =================================== The following is a suggestion on how to build a transformer from scratch and train it. There are, of course, many details we omit but I think it covers the most important basics.  Goals ----- From the ground up we want to  * Build the attention mechanism * Build a single-head attention mechanism * Build a multi-head attention mechanism * Build an attention block * Build one or multiple of a text classification transformer, BERT or GPT. The quality of the final model doesn’t have to be great, just clearly better than random. * Train the model on a small dataset. + We used the [polarity dataset](https://www.kaggle.com/datasets/nltkdata/sentence-polarity) for binary text sentiment classification. + We used the [AG\_NEWS](https://pytorch.org/text/stable/datasets.html) dataset (PyTorch built-in) for BERT and GPT. * Test that the model actually learned something + We looked at the first batch of the test data to see if the model predicted something plausible. + We compared the test loss of a random network with the test loss of the trained network to see if our model is better. Bonus goals * Visualize one attention head * Visualize how multiple attention heads attend to the words of an arbitrary sentence * Reproduce the grokking phenomenon (see e.g. [Neel’s and Tom’s piece](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking)). * Answer some of the questions in [Jacob Hilton's post](https://github.com/jacobhilton/deep_learning_curriculum/blob/master/1-Transformers.md). Soft rules ---------- For this calibration challenge, we used the following rules. Note, that these are “soft rules” and nobody is going to enforce them but it’s in your interest to make some rules before you start.  We were * **allowed** to read papers such as [Attention is all you need](https://arxiv.org/abs/1706.03762) or the [GPT-3 paper](https://arxiv.org/abs/2005.14165). * **allowed** to read tutorials on attention such as [The illustrated transformer](https://jalammar.github.io/illustrated-transformer/) (as long as they don’t contain code snippets). 
* **allowed** to look at [tutorials](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py) to build generic models in PyTorch as long as they don’t contain NLP architectures. * **allowed** to watch videos such as the ones from [Yannic Kilcher on NLP](https://www.youtube.com/watch?v=u1_qMdb0kYU&list=PL1v8zpldgH3pQwRz1FORZdChMaNZaR3pu&ab_channel=YannicKilcher) * **not allowed** to look at the source code of any transformer or attention mechanism before you have implemented it ourselves. In case we struggle a lot, we can take a peek after we tried and failed to implement one building block ourselves. + We found Andrej Karpathy’s [code](https://github.com/karpathy/minGPT) helpful for the GPT implementation. * **allowed** to replace a part with a PyTorch implementation once we have demonstrated that it is equivalent. For example, once we have shown that our attention mechanism produces the same output for the same input as the PyTorch attention mechanism, we can use the PyTorch code block. * **allowed** to use generic PyTorch functions that are not directly related to the task. For example, we don’t have to write the embedding layer, linear layer or layer-norm from scratch. Things to look out for ---------------------- Here are some suggestions on what to look out for during the project * Do I understand the tutorials? Does it feel obvious and simple or hard? * Am I comfortable with PyTorch? Do I understand how batching works, what the different dimensions of all matrices mean and what dimensions the intermediate results have? * Am I comfortable reading the paper or tutorials? Does the math they present feel easy or hard? Is “thinking in vectors and matrices” something that feels completely obvious? * Am I comfortable reading PyTorch code? When you compare your code to the PyTorch implementation, do you understand what they are doing and why? * How does the difficulty of this project compare to the intuitive difficulty of other projects you have worked on? Does it feel like implementing “just another neural network” or is it a rather new and hard experience? * How long does it take you to complete the different subparts? I’m not sure what good benchmark times are because I did the project with my girlfriend and we both have experience with PyTorch and ML. But here are some suggestions (I'm not sure if this is short or long; don't feel bad if it takes you longer): + 5-10 hours to build the attention mechanism, single- and multi-head attention and a transformer block. + 5 hours to build, train and test a text classifier + 5-10 hours to build, train and test a small BERT model + 5-10 hours to build, train and test a small GPT model I think that the “does it feel right” indicators are more important than the exact timings. There can be lots of random sources of error during the coding or training of neural networks that can take some time to debug. If you felt very comfortable, this might be a sign that you should apply to a technical AI alignment job. If it felt pretty hard, this might be a sign that you should skill up for a bit and then apply.  The final product ----------------- In some cases, you might want to show the result of your work to someone else. I’d recommend creating a GitHub repository for the project and creating a jupyter notebook or .py file for every major subpart. You can find our repo [here](https://github.com/mariushobbhahn/transformers_from_scratch). 
Don’t take our code as a benchmark to work towards; there might be errors, and we might have violated some basic guidelines of professional NLP coding due to our inexperience.

Problems we encountered
-----------------------

* PyTorch uses some magic under the hood, e.g. transpositions, reshapes, etc. This often made it a bit awkward to double-check our implementations, since they were technically correct but still yielded different results from the PyTorch implementation.
* PyTorch automatically initializes the weights of its classes, which makes it annoying to compare implementations. If you want to compare input-output behavior, you have to set the weights manually (see the sketch at the end of this post for one way to do this).
* The tokenizer pipeline is a bit annoying. I found the pre-processing steps for NLP much more unintuitive than, e.g., those for image processing. A lot of this can be solved by using libraries such as Hugging Face's to do the preprocessing for you.
* Our models were too small in the beginning. We wanted to start with smaller transformers to make the training faster. However, since we used a relatively large dataset, the biggest computation came from the final linear layer. Therefore, increasing the depth and width of the network or the number of attention heads didn’t even make a big difference in the overall runtime. Larger models, as one would expect, showed better performance.

How to think about AI safety up-skilling projects
=================================================

In my opinion, there are three important considerations.

1. Primarily, an AI safety up-skilling project is **a way for you to calibrate yourself**. Do you feel comfortable with the difficulty or is it overwhelming? Do you enjoy the work or not? How long does it take you to finish the project and how much help was needed? The main benefit of such a project is that it is an accessible way to gain clarity about your own skills.
2. An AI safety up-skilling project should be designed to build skills. Thus, even if you realize that you are not ready to be hired, you get something out of the project. In the case of the “transformer from scratch” challenge, for example, you get an increased understanding of transformers, which is useful for other paths in AI safety.
3. You can use the project as a way to demonstrate your skills to possible employers. Note that increased clarity for your employer is beneficial even if they don’t end up hiring you: they can point you more clearly towards your current weaknesses, which makes skill building easier. You can then work on those weaknesses and re-apply one or two years later.

Final words
===========

I hope this is helpful. In case something is unclear, please let me know. In general, I’d be interested to see more “AI safety up-skilling challenges”, e.g. providing more detail to a subsection of Jacob’s or Gabriel’s post.
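Appendix: a minimal attention sketch
------------------------------------

To give a flavor of the first building block, here is a minimal sketch of single-head scaled dot-product attention, together with the kind of equivalence check our soft rules allowed before swapping in the PyTorch block. To be clear, this is an illustrative sketch rather than code from our repo; the `SingleHeadAttention` class is made up for this example, while the weight-copying relies on how `nn.MultiheadAttention` packs its Q/K/V projections.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadAttention(nn.Module):
    """Scaled dot-product attention with learned Q/K/V and output projections."""
    def __init__(self, d_model):
        super().__init__()
        self.d_model = d_model
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # scores: (batch, seq_len, seq_len), scaled by sqrt of the head dimension
        scores = q @ k.transpose(-2, -1) / (self.d_model ** 0.5)
        weights = F.softmax(scores, dim=-1)
        return self.out_proj(weights @ v)

# Equivalence check against PyTorch's implementation (one head, no masking).
torch.manual_seed(0)
d_model = 16
x = torch.randn(2, 5, d_model)  # (batch, seq_len, d_model)
ours = SingleHeadAttention(d_model)
ref = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
with torch.no_grad():
    # in_proj packs the Q, K and V weights, in that order
    ref.in_proj_weight.copy_(torch.cat([ours.q_proj.weight, ours.k_proj.weight, ours.v_proj.weight]))
    ref.in_proj_bias.copy_(torch.cat([ours.q_proj.bias, ours.k_proj.bias, ours.v_proj.bias]))
    ref.out_proj.weight.copy_(ours.out_proj.weight)
    ref.out_proj.bias.copy_(ours.out_proj.bias)
out_ref, _ = ref(x, x, x)
assert torch.allclose(ours(x), out_ref, atol=1e-5)
```

If the assert passes, the replacement rule above lets you swap in the PyTorch block with a clear conscience.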
d4ebd958-2472-47ef-8a43-86fd9682f178
trentmkelly/LessWrong-43k
LessWrong
How does one use set theory for the alignment problem?

Is there a simple explanation of how set theory, mathematical logic, etc. can be used for the alignment problem? From reading this post I got the impression that it is very important for the research at MIRI, or, at least, was important in 2013.

Maybe I simply don't know what the author means by these subjects. When I hear them, I think about Gödel's theorem, the axioms of set theory and similar topics, and I can't imagine how they are related to alignment. It would be nice to read something where this is explained. Thank you!
d9f9ae0c-753a-451c-8854-d1c36dca872d
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Towards deconfusing values

*NB: Kaj recently said [some similar and related things](https://www.lesswrong.com/posts/2yLn8iTrvHoEgqXcJ/the-two-layer-model-of-human-values-and-problems-with) while I was on hiatus from finishing this post. I recommend reading it for a different take on what I view as a line of thinking generated by similar insights.*

One of the challenges with developing a theory of human values is dealing with the apparent non-systematic nature of human decision making, which makes it seem that human values are not consistent, coherent, or rational. One solution is to [build](https://intelligence.org/files/CEV.pdf) or [discover](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into) mechanisms by which they can be made legible and systematic. Another is to [embrace the illegibility and inconsistency](https://www.lesswrong.com/posts/NwxjvegAbLaBJ3TvC/towards-an-axiological-approach-to-ai-alignment-1) and [find ways of working with it](https://www.lesswrong.com/posts/7dvDgqvqqziSKweRs/formally-stating-the-ai-alignment-problem-1). This is a short start towards doing the latter, because I believe the former cannot be made to work well enough to stand up against [Goodhart effects](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) under extreme optimization by superintelligent AGI that we want to align with human values.

I've been [thinking](https://www.lesswrong.com/s/aQTBuq9X98m2KkWpx/p/XmqqkfY8XAJ6LkwdP) a [lot](https://www.lesswrong.com/posts/D3N2mkaZcHuSeAxch/ascetic-aesthetic#Rfgn53SYTgrCdRoif) about [what values are](https://www.lesswrong.com/s/aQTBuq9X98m2KkWpx/p/JYdPbGS9mpJn3SAyA), and in particular [looking for phenomena that naturally align](https://www.lesswrong.com/s/sv2CwqTCso8wDdmmi/p/Cu7yv4eM6dCeA67Af) with the category we variously call values, preferences, affinity, taste, aesthetics, or [axiology](https://www.lesswrong.com/s/aQTBuq9X98m2KkWpx/p/wvAEHzE55K7vfsXWz). The only thing I have found that looks like a [natural kind](https://plato.stanford.edu/entries/natural-kinds/) (viz. [a model that cuts reality at its joints](https://www.lesswrong.com/posts/esRZaPXSHgWzyB2NL/where-to-draw-the-boundaries)) is **[valence](https://www.lesswrong.com/posts/ALvnz3DrjHwmLG29F/values-valence-and-alignment)**. [Valence on its own doesn't fully explain](https://www.lesswrong.com/posts/Nfizy2uRNkZmX3AYB/preference-synthesis-illustrated-star-wars#DJvW23JsiGQSiFfq7) all the phenomena we want to categorize as values, especially things like [meta-preferences](https://www.lesswrong.com/posts/ic8yoGBMYLtaJkbxZ/conditional-meta-preferences) or "idealized" values that are abstracted away from the concrete, [embedded](https://www.lesswrong.com/posts/p7x32SEt43ZMC9r7r/embedded-agents) process of a human making a choice at a point in time. Instead it gives us a [mechanism](https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding) by which we can understand why a human makes one choice over another at some point in their [causal history](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy). And decisions are not themselves preferences, because decisions are embedded actions taken by [an agent in an environment](https://www.lesswrong.com/posts/qdqYrcGZTh9Lp49Nj/creating-environments-to-design-and-test-embedded-agents), whereas preferences are, as typically considered, generators of decisions.
I think we need to flip this notion of preferences as generators on its head, and in so doing we move towards becoming less confused about preferences. So let me describe my current model of how this works, and let's see if it explains the world we find ourselves in any better than existing theories of values and preferences. The New Model Axiology ====================== [Humans are embedded agents](https://www.lesswrong.com/posts/WJzsTmsDctYCCyMfy/humans-are-embedded-agents-too). They carry out processes that result in them causing events. We call this process of an agent causing events **acting** and the process that leads to taking one action rather than any other possible action a **decision**. Although I believe valence describes much of how humans decide what actions to take, we need not consider that detail here and instead consider the abstraction of a **decision generation process** that is, importantly, inseparable from its implementation up to the limit of [functional](https://plato.stanford.edu/entries/functionalism/) equivalence and [conditioned on](https://www.lesswrong.com/posts/uHb2LDW3LGhBMyq74/preference-conditional-on-circumstances-and-past-preference) the causal history of the agent. Another way to say this is that the algorithm that makes the decision can be reasoned about and modeled but there is no simplification of the algorithm that produces exactly the same result in all cases unless it is a functionally equivalent algorithm and the decision is situated in time such that it cannot be separated from its embedding in the environment (which includes the entire past of the universe). *NB*: *I think there are a lot of interesting things to say about how the decision generation process seems to work in humans—how it comes up with the set of choices it chooses between, how it makes that choice, how it is modified, etc.—however I am going to leave off considerations of that for now so we can consider the theory at a more abstract level without getting bogged down in the implementation details of one of the gears.* *Additionally, all of this is described in terms of things like agents that don't exist until they are [reified into existence](https://www.lesswrong.com/posts/wvAEHzE55K7vfsXWz/introduction-to-noematology): prior to that reification into ontology all we have is [stuff happening](https://www.lesswrong.com/posts/M7Z5sm6KoukNpF3SD/form-and-feedback-in-phenomenology). Let's try not to get hung up on things like where to draw the boundary of an agent right now and treat the base concepts in this model as useful handles for bootstrapping understanding of a model that I expect can be reduced.* **Preferences** are then statistical regularities (probability distributions) over decisions. Importantly they come causally after decisions. Consequently preferences may predict decisions but they don't generate them. **Meta-preferences** are then probability distributions over preferences. Values, aesthetics, axiology, etc. are abstractions for talking about this category of probability distributions over decisions (and decisions about decisions, etc.). 
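To make this a little more concrete, here is one possible formalization of the layering (the notation is my own illustrative sketch, not taken from anywhere else). Let the decision generation process be a function

$$g : H \times S \to A$$

taking a causal history $h \in H$ and an embedded situation $s \in S$ to an action $a \in A$. A preference is then a statistical regularity read off from the decisions $g$ has actually produced,

$$P(a \mid s) = \Pr[\, g(h, s) = a \,],$$

and a meta-preference is a distribution $Q(P)$ over such preference-distributions. Nothing in this picture generates decisions except $g$; $P$ and $Q$ are summaries computed after the fact.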
Here's a pictorial representation of the model if that helps make it clearer:

![](https://docs.google.com/drawings/d/e/2PACX-1vTLiZGRahzO8ojiZ8d_JqQ1d0cHTgbyQQSyCz_vur1C0roIYbznrESS26-eGluQvJ2ULIceTcW6ullR/pub?w=1440&h=1080)

This is as opposed to the standard or "old" model where preferences are the decision generators, which I'll stylize thusly, keeping in mind there's a lot of variation in how these "old" models work that I'm glossing over:

![](https://docs.google.com/drawings/d/e/2PACX-1vQOTkaqNr63VJWR8aSMQ6NlJIwDeL4qXmZGpy3YSfIbefNhcKgIYdwn1LOwED4Lr7QXaXMMQtsM9GcR/pub?w=1440&h=1080)

Note that in the new model preferences can still end up causally prior to decisions to the extent that they are discerned by an agent as features of their environment, but this is different from saying that preferences or meta-preferences are primary to the decision generation process. Thus when I say that preferences are causal postcedents of decisions, I mean that if an agent did not know about or otherwise "have" preferences, they would still make decisions via the decision generation process. Although backwards from the [standard model](https://plato.stanford.edu/entries/preferences/), this should not be too surprising, since all animals manage to make decisions regardless of how aware they are of themselves or their actions; thus we should expect our model of values to function in the absence of decision-generating preferences. Nonetheless, my guess is that this knowledge of preferences, especially knowledge of meta-preferences, feels like knowledge of the decision generation process from the inside, and it provides an important clue for understanding how humans might come to develop fixed points in their decision generation processes even if it really is all just valence calculations, and why humans have seized on the idea that preferences are a good model for the decision generation process.

You might object that I've just rearranged the terms or that this is just a more detailed model of revealed preferences, and to some extent those things are true, but I also think I've done it in a way that pulls apart concepts that were previously confounded, such that we get something more useful for addressing AI alignment, which we'll explore in more detail now.

Implications and Considerations
===============================

Confounded Notions of Preferences
---------------------------------

When we think of preferences as the generators of decisions, we run into all sorts of confusions. For example, if we equate preferences with revealed preferences, people object that their revealed preferences leave something out about the process that generated their behavior and that generalizing from their observed behavior might not work as they would expect it to when applied to novel situations. This appears to be a general problem with [most attempts](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc) at having computers learn human values today: they conflate behavior with the generators of behavior, find the generators [only by making normative assumptions](https://www.lesswrong.com/posts/cnjWN4mzmWzggRnCJ/practical-consequences-of-impossibility-of-value-learning), and then [end up](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/ANupXf8XfZo2EJxGv) with something [that almost](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/gnvrixhDfG7S2TpNL) but [doesn't quite](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/cnC2RMWEGiGpJv8go) match the generator.
But if we don't pay attention to revealed preferences, we are also misled about people's preferences, since, for example, what people claim to be their preferences (their stated preferences) also don't seem to do a very good job of predicting their behavior. Maybe that's because people incorrectly assume their [partial preferences](https://www.lesswrong.com/posts/CiB3myyeEhFRgmKPL/partial-preferences-and-models) are "full" preferences in a total order, which [may be related](https://www.lesswrong.com/posts/XmqqkfY8XAJ6LkwdP/akrasia-is-confusion-about-what-you-want) to [scope insensitivity](https://www.lesswrong.com/posts/JTFQDdMmGC8NhQzHH/scope-insensitivity-judo); maybe it's because people are [deceiving themselves](https://www.lesswrong.com/posts/BgBrXpByCSmCLjpwr/book-review-the-elephant-in-the-brain) about their preferences for various reasons. Whatever the reason, stated preferences and revealed preferences both result in models with errors more than large enough for them to [fall apart under superintelligent optimization](https://www.lesswrong.com/posts/NqQxTn5MKEYhSnbuB/goodhart-s-curse-and-limitations-on-ai-alignment).

Another problem, much commented upon by me at least, with treating preferences as generators of decisions is that this places the descriptive strength of preferences at odds with the normative demands we would like to place on preferences. For example, there's a lot to recommend rational preferences and preferences that can be described by a utility function, so people have put a lot of work into trying to find ways that these might also explain observed human behavior, even if it's to consider human behavior degraded from an ideal that it might approach if only we thought longer, knew more, etc. But if we can create some space, in our models, between the process of making decisions and the pattern of decisions made, this would ease much of that tension in terms of our models' abilities to describe behavior and serve our purposes.

Perhaps the solution lies in some synthesis of stated and revealed preferences, but that looks to me like trying to patch a broken system or put lipstick on a pig, and at the end of the day such a model may work a little better by papering over the faults of the two submodels, but it will also be a [kludge of epicycles that will crack if a comet comes screaming through](https://www.lesswrong.com/posts/XAFQkbe6c9TRts6Ex/what-value-epicycles). Alternatively we could look for some other method of identifying preferences, like brain scans, but at this point I think we are just arguing terminology. I could probably be convinced that calling the decision generation process "preferences" has some strong value, but from where I stand now it seems to cause more confusion than it resolves, so I'd rather see preferences treated solely as causally after decisions and talk some other way about whatever is causally before.

How It Helps
------------

What are the consequences of understanding preferences as causally downstream of actions rather than causally upstream of them? And does it make any difference, since we still have something, the decision generation process, doing the work that we previously asked preferences, perhaps or perhaps not modeled with a utility function, to do? In other words, how does this model help us? One of the big things it does is clear up confused thinking that comes from getting the causal relationship between decision generation and preferences backwards.
Rather than trying ever harder to find a theory that serves the two masters of accurately describing human behavior and obeying mathematical criteria that make our models behave in useful ways, we can let them operate independently. Yes, we may still want to, for example, modify human behavior to match norms, such as by increasing the rationality of human preferences, but also understand that the change doesn't come from changing preferences directly, but from changing decision generation processes such that, as a consequence, preferences are changed. And we may still want to design machines aligned with human values, but understand that aligning a machine with human preferences is not the same thing as aligning a machine with human decision generation processes, since only the latter stands to capture all that humans value.

Another advantage of this model is that it is more explicitly embedded in the world. Preferences are intentionally an abstraction away from many of the messy details of how decisions are made, but as a result they lose some of their grip on reality. Said another way, preferences are a leaky [abstraction](https://www.lesswrong.com/posts/wuJpYLcMEBz4kcgAn/what-is-abstraction-1), and while they may be adequate for addressing questions in microeconomics, they seem inadequate for helping us build aligned AI. [There is no leakless abstraction](https://www.lesswrong.com/posts/KJ9MFBPwXGwNpadf2/skill-the-map-is-not-the-territory), but by realizing that preferences are higher up the abstraction stack and thus more leaky, we can see the need to [go down the stack and get nearer the territory](https://www.lesswrong.com/posts/nEBbw2Bc2CnN2RMxy/gears-level-models-are-capital-investments) to find a model with [more gears](https://www.lesswrong.com/posts/B7P97C27rvHPz3s9B/gears-in-understanding) that [better captures what matters](https://www.lesswrong.com/posts/3up8XBeGGHf77sNR4/the-map-has-gears-they-don-t-always-turn), maybe even up to a limit where superintelligent optimization is no longer a threat but an opportunity.

In short, I think the main thing this new model does is free us from the constrictions of trying to make the preference model work for humans, while accounting for the embeddedness of humans. It still doesn't say enough about how decisions are generated, but it gives us a better-shaped model, into which an abstraction of the implementation details can be slotted, than the old model provided.

Next Steps
==========

I feel like what I have described in this post is only one aspect of the model that is slowly coalescing in my mind, and it is able to crystallize into something communicable only by having germs to form around, provided by interactions with others. So, what have I missed, or what would you like to know that would test this theory/reframing? What, if it were true, would invalidate it? I'd love to know!
eff401c8-5f8e-4537-a670-404997fd1c2f
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Whither Manufacturing? Today's post, Whither Manufacturing? was originally published on December 2, 2008. A summary:   > There's no general reason to suppose that nanotechnology will enable a boom in local production. The location of production is a trade off between economies and dis-economies of scale. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Recursive Self-Improvement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
fcf55559-2c47-4fbb-9598-473124c2ea68
trentmkelly/LessWrong-43k
LessWrong
Open Thread, April 1-15, 2013 If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
d7439072-9517-4a06-916d-be322b321c28
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
"Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities) This is a brief, stylized recounting of a few conversations I had at some point last year with people from the non-academic AI safety community:[[1]](#fn8n7dpil3dnd)  **Me:** you guys should write up your work properly and try to publish it in ML venues. **Them:** well that seems like a lot of work and we don't need to do that because we can just talk to each other and all the people I want to talk to are already working with me. **Me:** What about the people who you don't know who could contribute to this area and might even have valuable expertise?  You could have way more leverage if you can reach those people.  Also, there is increasing interest from the machine learning community in safety and alignment... because of progress in capabilities people are really starting to consider these topics and risks much more seriously. **Them:** okay, fair point, but we don't know how to write ML papers. **Me:** well, it seems like maybe you should learn or hire people to help you with that then, because it seems like a really big priority and you're leaving lots of value on the table. **Them:** hmm, maybe... but the fact is, none of us have the time and energy and bandwidth and motivation to do that; we are all too busy with other things and nobody wants to. **Me:** ah, I see!  It's an incentive problem!  So I guess your funding needs to be conditional on you producing legible outputs. **Me, reflecting afterwards:** hmm... Cynically,[[2]](#fnp41czxf0rlc) not publishing is a really good way to create a moat around your research... People who want to work on that area have to come talk to you, and you can be a gatekeeper.  And you don't have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete... **EtA:** In comments, people have described adhering to academic standards of presentation and rigor as "jumping through hoops".  There is an element of that, but this really misses the value that these standards have to the academic community.  This is a longer discussion, though... 1. **[^](#fnref8n7dpil3dnd)**There are sort of 3 AI safety communities in my account: 1) people in academia 2) people at industry labs who are building big models 3) the rest (alignment forum/less wrong and EA being big components). I'm not sure where to classify new orgs like Conjecture and Redwood, but for the moment I put them here. I'm referring to the last of these in this case. 2. **[^](#fnrefp41czxf0rlc)**I'm not accusing anyone of having bad motivations; I think it is almost always valuable to consider both people's concious motivations and their incentives (which may be subconscious (EtA: or indirect) drivers of their behavior).
91b8c638-997e-4da5-a6ae-dd635f057544
trentmkelly/LessWrong-43k
LessWrong
Tactical vs. Strategic Cooperation

As I've matured, one of the (101-level?) social skills I've come to appreciate is asking directly for the narrow, specific thing you want, instead of debating around it.

What do I mean by "debating around" an issue? Things like:

"If we don't do what I want, horrible things A, B, and C will happen!" (This tends to degenerate into a miserable argument over how likely A, B, and C are, or a referendum on how neurotic or pessimistic I am.)

"You're such an awful person for not having done [thing I want]!" (This tends to degenerate into a miserable argument about each other's general worth.)

"Authority Figure Bob will disapprove if we don't do [thing I want]!" (This tends to degenerate into a miserable argument about whether we should respect Bob's authority.)

It's been astonishing to me how much better people respond if instead I just say, "I really want to do [thing I want]. Can we do that?"

No, it doesn't guarantee that you'll get your way, but it makes it a whole lot more likely. More than that, it means that when you do get into negotiation or debate, that debate stays targeted to the actual decision you're disagreeing about, instead of a global fight about anything and everything, and thus is more likely to be resolved.

Real-life example: Back at MetaMed, I had a coworker who believed in alternative medicine. I didn't. This caused a lot of spoken and unspoken conflict. There were global values issues at play: reason vs. emotion, logic vs. social charisma, whether her perspective on life was good or bad. I'm embarrassed to say I was rude and inappropriate. But it was coming from a well-meaning place; I didn't want any harm to come to patients from misinformation, and I was very frustrated, because I didn't see how I could prevent that outcome. Finally, at my wit's end, I blurted out what I wanted: I wanted to have veto power over any information we sent to patients, to make sure it didn't contain any factual inaccuracies. Guess what? She agreed instantly
64b26a91-2a3d-413d-94f7-f359cdc199d1
trentmkelly/LessWrong-43k
LessWrong
4. Existing Writing on Corrigibility (Part 4 of the CAST sequence) This document is an in-depth review of the primary documents discussing corrigibility that I’m aware of. In particular, I'll be focusing on the writing of Eliezer Yudkowsky and Paul Christiano, though I’ll also spend some time at the end briefly discussing other sources. As I go through the writing of those who’ve come before, I want to specifically compare and contrast those ideas with the conceptualization of corrigibility put forth in earlier documents and the strategy proposed in The CAST Strategy. At a high level I mostly agree with Christiano, except that he seems to think we’ll get corrigibility emergently, whereas I think it’s vital that we focus on directly training purely corrigible agents (and he wants to focus on recursive architectures that seem brittle and unproven, but that’s more of an aside). In my opinion this document goes into more detail than I expect >95% of readers want. I’ve tried to repeat all of the important ideas that show up in this document elsewhere, so you are encouraged to skim or just skip to the next post in the sequence: Open Corrigibility Questions. Note: I only very recently learned about Human Control: Definitions and Algorithms but haven’t yet had the time/spoons to read it in any depth. Apologies to Ryan Carey and Tom Everitt for the neglect! In this document, quotes from the source material will be indented. All quotes are from the document linked in that section. Unless noted, all bold text formatting is my addition, used to emphasize/highlight portions of the quote. Italics within quotations are always from the original source. Eliezer Yudkowsky et al. Corrigibility (2015) Let’s begin our review with the oldest writing on the topic that I’m aware of: the MIRI paper “Corrigibility” from 2015 written by Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong. (Apologies for lumping this into Yudkowsky’s section. I find it helpful to think of this as “the Yudkowsky positio
55966b07-6df7-4eca-b57d-188fc5c866af
trentmkelly/LessWrong-43k
LessWrong
Simple proofs of the age of the universe (or other things)

I saw some time ago Carl Sagan asked schoolchildren for proofs that the earth is round. I can no longer find it, but they gave him sound, simple answers. One was watching a ship seem to sink below the horizon as it grows distant. One was photos of Earth from space. One, I believe, was seeing how the sun may reach the bottom of a well in the tropics, but not farther north.

I'm trying to find similar answers related to the age of the universe. I'll operationally define "old" as "older than human history." So far:

* We know by parallax the distances to various stars within 300 light-years, and can tell from their distance, color, and brightness how bright a star of a given color should be. So if we see a farther star and measure its color and brightness, we can deduce the distance and thereby how long it took the light to get here. (This is more complicated than I'd like, but oh well.)
* If the universe started X thousand years ago, wouldn't we be seeing more stars appearing every year at the X-thousand light-year range?
* There is a star that exploded? did a nova? and some decades later its light illuminated a nearby nebula. From the time it took to illuminate, we know the distance between star and nebula. Since we know the angle, we can get the length of the other legs of the triangle, which are longer than the distance light could travel in human history. (I can't remember the name of the star, alas.)
* Carbon dating.
* The Grand Canyon's sedimentary layers.
* South America and Africa look like they fit together. If they traveled at current speeds of continental drift, they couldn't have reached their current distance in a few thousand years (plus, their animal populations would have mixed when they were touching).

What else?

It's interesting to think of how this works on other questions. Germs, or spontaneous generation of life? proved by canned food. Oxygen, or phlogiston? proved by a sealed-up candle. Quarks?
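To make the light-echo argument (the exploding star and the nebula) explicit, with illustrative numbers rather than the actual observation: if the nebula lights up $t$ years after the star's outburst, the star-nebula separation is $r = ct$, i.e. $t$ light-years. If that separation subtends an angle $\theta$ radians on the sky, the small-angle formula gives the distance

$$D \approx \frac{r}{\theta} = \frac{ct}{\theta}.$$

For example, $t = 10$ years and $\theta = 1'' \approx 4.85 \times 10^{-6}$ rad give $D \approx 2 \times 10^{6}$ light-years, so the light we're seeing left the system around two million years ago, vastly longer than human history.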
588f3d9f-45af-4da5-b6ea-894ffc5cb3bd
trentmkelly/LessWrong-43k
LessWrong
New LW Meetup: Glasgow This summary was posted to LW Main on November 14th. The following week's summary is here. New meetups (or meetups with a hiatus of more than a year) are happening in: * Bangalore Meetup: 22 November 2014 03:56AM * Glasgow (Scotland) Meetup: 16 November 2014 03:30PM Irregularly scheduled Less Wrong meetups are taking place in: * East Coast Solstice Megameetup: 20 December 2014 03:00PM * European Community Weekend 2015: 12 June 2015 12:00PM * Saint Petersburg meetup - "with probable lectures": 14 November 2014 07:00PM * Urbana-Champaign: TRVTH: 16 November 2014 02:00PM * Utrecht: Game theory: 16 November 2014 02:00PM * Utrecht: Rationality Games: 30 November 2014 02:00PM * Warsaw November Meetup: 17 November 2014 06:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Boston: Self Therapy: 16 November 2014 03:30PM * Canberra: Liar's Dice!: 28 November 2014 06:00PM * Seattle Secular Solstice: 13 December 2014 05:30PM * [Sydney] regular meetup - Significant things I have gotten wrong: 26 November 2014 06:30PM * Vienna: 22 November 2014 03:00PM * [Vienna] A Rationalist's Guide to Strength (Vienna): 23 November 2014 02:00PM * [Vienna] Rationality Weekend Vienna: 13 December 2014 03:00PM * Washington, D.C.: To-Do List Hacking: 16 November 2014 03:00PM * West LA: Linguistic Relativity: 19 November 2014 07:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more
df39413c-f3ca-47b4-a1e3-6de97bebf3a2
trentmkelly/LessWrong-43k
LessWrong
The Dangers of Outsourcing Thinking: Losing Our Critical Thinking to the Over-Reliance on AI Decision-Making

I’ve become so reliant on a GPS that using maps to direct myself feels like a foreign concept. Google Maps, Waze, whatever: if it's outside of my neighbourhood, I’m punching in the address before I head out. Sometimes I notice the GPS taking slower routes or sending me the wrong way as I get out of a parking lot, but regardless, I just follow its directions, because I don’t have to think. Though I know that without this convenient tool, I’d be lost (literally).

Today, AI is making more and more decisions for us (Microsoft, 2024). The more it develops, the more we use it and trust the information it provides us. While it saves us time, it also changes the way we think. The more we trust AI to do the hard work — whether it be writing us a report, diagnosing diseases, or finding a bug in our code — the less we engage our critical thinking skills. Many of us use AI, in small ways or large, and it's important we know what psychological effects it's having on our brains. Just like relying on a GPS can make us less confident navigating on our own, depending too much on AI to decide for us can weaken our ability to think critically. Two psychological factors drive this effect, automation bias and cognitive offloading, which subtly shape how we process information and make decisions.

The Erosion of Critical Thinking

Critical thinking isn’t just using our brains; it’s a process that we use to think rationally, to understand logical connections between ideas, to evaluate arguments, and to identify inconsistencies in reasoning. It’s crucial for effective problem-solving, making informed decisions, and acquiring new knowledge (Gerlich, 2025, p.1). As AI continues to embed itself into our daily lives, we’re increasingly deferring to its outputs without scrutiny. This is a form of automation bias, where users favour automated solutions over their own judgement, potentially ignoring contradictory information or not considering alternative options (Spatola, 2024, p.2). Au
3efc7427-071f-4004-9917-94c46da5a1b9
trentmkelly/LessWrong-43k
LessWrong
Anthropologists and "science": dark side epistemology? The American Anthropological Association has apparently decided to ditch the word "science", arguably so they can promote political messages without hindrance from empirical data. If so, this might be an example of dark side epistemology. (Articles in Psychology Today and NYT).
2b70efda-3535-4a64-8e93-1bbaa7035dc3
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes Thread July 2015 Another month, another rationality quotes thread. The rules are: * Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here. * No more than 5 quotes per person per monthly thread, please. * Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
f85f8134-8468-48cb-bf31-ad83eb466ccf
trentmkelly/LessWrong-43k
LessWrong
Switching to Electric Mandolin A little over a year ago I bought a solid-body electric mandolin. At the time I was thinking I would still mostly play my acoustic or bring both to gigs, but instead I've ended up switching entirely. I compared recordings and amplification options, and it turns out that the cleaner sound works much better in the chaos of a dance hall. I also really like that it works with a talk box and that I don't need to worry about feedback on stage anymore. I still play the acoustic when I'm jamming with friends or family, but it now spends most months without getting out of its case. Which does make me sad; it's a good mandolin and I enjoy playing it. Comment via: facebook
71cfc37d-6c97-49ea-b4bd-15c622152e96
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What I'm doing

*I've found myself with an intuition that there's a lot of value in people writing down exactly what they're doing and the mechanisms/assumptions behind that. I further have the intuition that it's perfectly fine if people do this rather quickly, as most of the marginal value is in doing this at all rather than making sure it's all perfect. A lot of what I'm writing below won't directly be very useful to other people, but it may be useful in terms of demonstrating how exactly I think people should write down what they're doing.*

Currently, I'm doing AI Safety movement building. Most of my projects (local movement building, AI Safety nudge competition, AI Safety Prioritisation course) are broad pipeline projects. This means that a significant component of their theory of impact is that they deliver value by marginal people increasing the size of the field. So I am implicitly assuming either that a) there's a broad set of people who can contribute to AI safety, or b) even if there are only a few special people who can have any real impact, this kind of broad outreach will still find some of them, and the percentage won't be so small as to render broad outreach essentially worthless.

I should also acknowledge that even though my outreach is reasonably broad, I'm not all the way towards the end of the spectrum. For example, I'm not heavily promoting AI Safety ANZ to the general public, but mostly trying to pull in people from within EA who already have an interest in AI safety, or people who I know personally who I'd love to have involved. Similarly, for the intermediate AI Safety course, we're focusing on people who have already done the fundamentals course or have spent a large amount of time engaging with AI Safety. Further, I’m trying to encourage people to do the AGI safety fundamentals course, so that there’s a baseline of common knowledge and I can target events at a higher level. This is based on the assumption that it is more important to enable already engaged people to pursue it through their career than it is to just create more engaged people.

For my local community building, I don't really see outreach as my main focus. I think it's important to conduct enough outreach to build a critical mass and to engage in outreach when there's an especially good opportunity, but I don't see the size of the community as the most important factor to optimise for, so outside of this I'd only engage in direct outreach occasionally.

I guess one of the key questions I'm trying to understand is what kind of community would be able to solve the alignment problem. I can't say that I've got a good answer to that yet, but I think in addition to needing highly intelligent and highly dedicated people, it's pretty important for these people to be open-minded and to have a strong understanding of which parts of the problem are most important and which approaches are most promising. I’m pretty sold on the claim that most alignment work has marginal impact, but I’m not highly confident yet about which parts of the problem are most important.

I’m running events monthly in Sydney and every few weeks online. This feels like very little time compared to the amount of time that people need to skill up to the level where they contribute, so I feel that a lot of the benefit will come from keeping people engaged and increasing the proportion of people who are exposed to important ideas or perhaps even develop certain attributes.
If I run some retreats they will allow me to engage people for larger amounts of time, but even so, the same considerations apply. The Sydney AI Fellowship is different in that it provides people with the time to develop deep models of the field or invest significant amounts of time developing skills, and the success of the first program suggests this could be one of the most impactful things that I run.

My current goal with community building is to establish critical mass by experimenting with various events until I’m able to establish reliable turnout, then to try to figure out how to help people become the kind of people who could contribute. By running these projects I’m having some direct impact, but I’m also hoping that I’ll be able to eventually hand it off to someone else who might even be better suited to community building. I see community building projects as being easier to hand off since, if successful, they will draw in talented people, but a) I’d be a bit reluctant to hand it over to someone else before I had found a programme of activities that worked, and b) I worry that delegation is harder than it seems, as you need someone who has good judgement, which is hard to define, AND you have to worry about their ability to further hand it off to someone competent down the line. In addition to my direct impact, I’m hoping that if I am successful there will be more cause-specific movement builders at a country level. Again, this theory of impact assumes that these movement builders will have solid judgement and will be able to produce the right kind of community for making progress on this problem.

Beyond this, I often write ideas up on LessWrong, here, Twitter, Facebook or other locations. A large part of my motivation is probably based on a cognitive bias where I overestimate the impact of these posts, which probably only reach a few people (for most of whom they aren’t decision-relevant), and most of these people probably only retain a small part of what I write, given the flood of content online. I guess this pushes me heavily towards thinking that it’s important to find ways to build communities with common knowledge, but a) this is hard to do as people need to invest time, b) it’s hard to figure out what should be common knowledge, and c) this can lead to conformity.

I also think a lot of the value of starting a local AI safety group is that its existence passively pushes people to think more about pursuing projects in this space, and its existence removes trivial inconveniences. The existence of a designated organiser makes it very easy for people to know who to reach out to if they want to know more, and the existence of the group reduces people’s self-doubt and makes it easier for people to orient themselves.

I’ve been having a decent number of one-on-one conversations recently. These conversations normally focus around people trying to understand how much of an issue it is, whether they are suited to have an impact, and what needs to be done. In terms of how important it is, I try not to overstate my ML knowledge, but I try to explain why I think I can nonetheless feel confident that this is an issue. In terms of whether people can make a difference, I try to explain how a wider range of people can make a difference than most people think. In terms of what needs to be done, I try to list a bunch of ideas in the hope that it gives the impression that there are lots of projects, but I don’t think I’m doing it very well.
I try to keep up with the AI safety and movement building content on the EA forum, but there’s so much content that I’m struggling. I feel I should probably focus less on keeping up and more on reading the most important old content, but I find myself really resistant to that idea.

*Anyway, I just thought this would be a valuable exercise and I thought I’d share it in case other people find this kind of exercise valuable. I guess the most important thing is to be really honest about what you’re doing and why; and then maybe it’ll become more obvious what you should be doing differently?*
6771939d-c058-4a0f-b4ea-8abcefb85490
trentmkelly/LessWrong-43k
LessWrong
Meetup : Durham HPMoR Discussion group

Discussion article for the meetup : Durham HPMoR Discussion group

WHEN: 20 October 2012 11:00:00AM (-0400)

WHERE: Parker and Otis, 112 S Duke St, Durham NC 27701

We'll be discussing Harry Potter and the Methods of Rationality, chapters 8-11. While we encourage everyone to read the chapters in question, please feel free to come even if you haven't. We'll summarize the main points of the chapters as we discuss them.

Discussion article for the meetup : Durham HPMoR Discussion group
bbb04354-c42f-40aa-9d10-82056162f867
trentmkelly/LessWrong-43k
LessWrong
Moderation Reference

This is a repository of moderation decisions that we expect to make semi-frequently, where it's somewhat complicated to explain our reasoning but we don't want that explanation to end up dominating a thread. We'll be adding to this over time, and/or converting it into a more scalable format once it's grown larger.

Death by a Thousand Cuts

There's a phenomenon wherein a commenter responds to a post with a reasonablish-looking question or criticism. The poster responds, which doesn't satisfy the commenter's concerns. It turns into a sprawling debate. Most of the time, this is fine – part of the point of LessWrong is to expose your ideas to criticism to make them stronger. But criticism varies in quality. Three particular dimensions (in descending order of importance) that we think matter are:

* Steelmanning – The best criticism engages with the strongest form of an idea. See this post for more detail on why you'd want to do that. Two subsets of this are:
  * Does it address core points? – Sometimes a critique is pointing at essential cruxes of a person's argument. Other times it pedantically focuses on minor examples.
  * Does it put in interpretive effort? – Sometimes, a critic puts in substantial effort to understand the poster's point (and, if the author worded something confusingly, helps them clarify their own thinking). Other times critics generally expect authors to put in all the interpretive effort in the conversation. (In some situations, the issue is that the author has in fact written something confusing or wrong. In other situations, it's the critic who isn't understanding the point.)
* Is it kind? – While less crucial than steelmanning, LessWrong is generally a more fun place to be if people aren't being blunt or rude to each other. All else being equal, being kind rather than blunt is better.

Any given one of the three spectrums above isn't necessarily bad. We don't want a world where all criticism must involve a lot of effort on the p
f3ef1197-eb25-4e8b-80f3-903f918ce329
trentmkelly/LessWrong-43k
LessWrong
Please Bet On My Quantified Self Decision Markets

There are too many possible quantified self experiments to run. Do hobbyist prediction platforms[1] make prioritisation easier? I test this by setting up multiple markets, in order to run two experiments (the best one, and a random one), mostly for the effects of nootropics on absorption in meditation.

dynomight 2022 has a cool proposal:

> Oh, and by the way are you THE NSF or DARPA or THE NIH or A BILLIONAIRE WHO WANTS TO SPEND LOTS OF MONEY AND BRAG ABOUT HOW YOU ADVANCED THE STATE OF HUMAN KNOWLEDGE MORE THAN ALL THOSE OTHER LAME BILLIONAIRES WHO WOULDN’T KNOW A HIGH ROI IF IT HIT THEM IN THE FACE? Well how about this:
>
> 1. Gather proposals for a hundred RCTs that would each be really expensive but also really awesome. (E.g. you could investigate SALT → MORTALITY or ALCOHOL → MORTALITY or UBI → HUMAN FLOURISHING.)
> 2. Fund highly liquid markets to predict the outcome of each of these RCTs, conditional on them being funded.
> * If you have hangups about prison, you might want to chat with the CFTC before doing this.
> 3. Randomly pick 5% of the proposed projects, fund them as written, and pay off the investors who correctly predicted what would happen.
> 4. Take the other 95% of the proposed projects, give the investors their money back, and use the SWEET PREDICTIVE KNOWLEDGE to pick another 10% of the RCTs to fund for STAGGERING SCIENTIFIC PROGRESS and MAXIMAL STATUS ENHANCEMENT.

—dynomight, “Prediction market does not imply causation”, 2022

Well, I'm neither a billionaire nor the NSF or DARPA, but I have run two shitty self-blinded RCTs on myself already, and I'm certainly not afraid of the CFTC. And indeed I don't have a shortage of ideas on things I could run RCTs on, but the time is scarce (I try to collect m=50 samples in each RCT, which (with buffer-days off) is usually more than 2 months of data collection). So I'll do what @saulmunn pointed out to me is a possibility: I'm going to do futarchy (on) myself by setting up a set of markets of Ma
0d5aa602-3797-4d12-94a2-ae50845e3edb
trentmkelly/LessWrong-43k
LessWrong
Reducing x-risk might be actively harmful Great. Another crucial consideration I missed. I was convinced that working on reducing the existential risk for humanity should be a global priority. Upholding our potential and ensuring that we can create a truly just future seems so wonderful. Well, recently I was introduced to the idea that this might actually not be the case.  The argument is rooted in suffering-focused ethics and the concept of complex cluelessness. If we step back and think critically though, what predicts suffering more than the mere existence of sentient beings—humans in particular? Our history is littered with pain and exploitation: factory farming, systemic injustices, and wars, to name just a few examples. Even with our best intentions, humanity has perpetuated vast amounts of suffering. So here’s the kicker: what if reducing existential risks isn’t inherently good? What if keeping humanity alive and flourishing actually risks spreading suffering further and faster—through advanced technologies, colonization of space, or systems we can’t yet foresee? And what if our very efforts to safeguard the future have unintended consequences that exacerbate suffering in ways we can't predict? I was also struck by the critique of the “time of perils” assumption. The idea that now is a uniquely critical juncture in history, where we can reduce existential risks significantly and set humanity on a stable trajectory, sounds compelling. But the evidence supporting this claim is shaky at best. Why should we believe that reducing risks now will have lasting, positive effects over millennia—or even that we can reduce these risks at all, given the vast uncertainties? This isn’t to say existential risk reduction is definitively bad—just that our confidence in it being good might be misplaced. A truly suffering-focused view might lean toward seeing existential risk reduction as neutral at best, and possibly harmful at worst. It’s humbling, honestly. And frustrating. Because I want to believe that by fo
b7130495-e01b-4de8-beec-fdbe2d2b1694
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA—DUMP STAT Discussion article for the meetup : West LA—DUMP STAT WHEN: 10 December 2014 07:00:00PM (-0800) WHERE: 11066 Santa Monica Blvd, Los Angeles, CA How to Find Us: Go into this Del Taco. We will be in the back room if possible. Parking is free in the lot out front or on the street nearby. Discussion: Contra Heinlein, humans in large populations ought to specialize. This means sacrificing jack-of-all for expertise in one domain. But which things are best to sacrifice? Which things do you already do that you should do less of? What should your dump stats be? Recommended Reading: * The Social Coprocessor Model * Comparative Advantage * Dump Stat on TVTropes No prior exposure to Less Wrong is required; this will be generally accessible. Discussion article for the meetup : West LA—DUMP STAT
1d4a17c0-0b00-485f-9e3a-b54300485985
trentmkelly/LessWrong-43k
LessWrong
Understanding LLMs: Some basic observations about words, syntax, and discourse [w/ a conjecture about grokking]

Cross-posted from New Savanna.

I seem to be in the process of figuring out what I’ve learned about Large Language Models in the process of playing around with ChatGPT since December of last year. I’ve already written three posts during this phase, which I’ll call my Entanglement phase, since this re-thinking started with the idea that entanglement is the appropriate way to think about word meaning in LLMs.

This post has three sections. The first section is stuff from Linguistics 101 about form and meaning in language. The second argues that LLMs are an elaborate structure of relational meaning between words and higher-order structures. The third is about the distinction between sentences and higher-level structures and the significance that has for learning. I conjecture that there will come a point during training when the engine learns to make that distinction consistently and that that point will lead to a phase change – grokking? – in its behavior.

Language: Form and Meaning

Let us start with basics: Linguists talk of form and meaning; Saussure talked of signifier and signified. That is to say, words consist of a form, or signifier, a physical signal such as a sound or a visual image, which is linked to or associated with a meaning, or signified, which is not so readily characterized and, in any event, is to be distinguished from the referent or interpretant (to use Peirce’s term). Whatever meaning is, it is something that exists in the minds/brains of speakers and only there.

Large Language Models are constructed over collections of linguistic forms or signifiers. When humans read texts generated by LLMs, we supply those strings of forms with meanings. Does the LLM itself contain meanings? That’s a tricky question. On one sort of account, favored by at least some linguists and others, no, they do not contain meanings. On a different sort of account, yes, they do. For the LLM is a sophisticated and complicated structure based on co-occurrence statistics of
38971d51-9a20-47e3-8cbc-9efbe89884cd
trentmkelly/LessWrong-43k
LessWrong
Oracle paper Available on the arXiv, my paper on two types of Oracles (AIs constrained to answering questions only), and how to use them more safely. > An Oracle is a design for potentially high power artificial intelligences (AIs), where the AI is made safe by restricting it to only answer questions. Unfortunately most designs cause the Oracle to be motivated to manipulate humans with the contents of their answers, and Oracles of potentially high intelligence might be very successful at this. Solving the problem, without compromising the accuracy of the answer, is tricky. This paper reduces the issue to a cryptographic-style problem of Alice ensuring that her Oracle answers her questions while not providing key information to an eavesdropping Eve. Two Oracle designs solve this problem, one counterfactual (the Oracle answers as if it expected its answer to never be read) and one on-policy (limited by the quantity of information it can transmit).
74450e01-5fff-411d-99f3-42c70fda7817
trentmkelly/LessWrong-43k
LessWrong
Identifying bias. A Bayesian analysis of suspicious agreement between beliefs and values.

Here is a new paper of mine (12 pages) on suspicious agreement between beliefs and values. The idea is that if your empirical beliefs systematically support your values, then that is evidence that you arrived at those beliefs through a biased belief-forming process. This is especially so if those beliefs concern propositions which aren’t probabilistically correlated with each other, I argue. I have previously written several LW posts on these kinds of arguments (here and here; see also mine and ClearerThinking’s political bias test), but here the analysis is more thorough. See also Thrasymachus' recent post on the same theme.
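To see the shape of the argument in toy form (this is my own illustration, not the paper's formal model): suppose someone's empirical beliefs concern $n$ mutually independent propositions, and that absent any bias each belief would come out on the value-congenial side with probability $1/2$. Then

$$\Pr(\text{all } n \text{ beliefs support one's values} \mid \text{unbiased}) = (1/2)^n,$$

so observing across-the-board agreement multiplies the prior odds of a biased belief-forming process by up to $2^n$ (if bias makes agreement near-certain). For $n = 10$ independent questions, chance agreement is about $0.1\%$. The independence assumption does the real work here, which is why the argument is strongest for propositions that aren't probabilistically correlated with each other.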
a4c6f531-cb0d-499b-b1b5-12d2d9c8ca70
StampyAI/alignment-research-dataset/arxiv
Arxiv
Neuro-symbolic EDA-based Optimisation using ILP-enhanced DBNs

1 Introduction
---------------

There are many real-world planning problems for which domain knowledge is qualitative, and not easily encoded in a form suitable for numerical optimisation. Here, for instance, are some guiding principles that are followed by the Australian Rail Track Corporation when scheduling trains: (1) If a healthy Train is running late, it should be given equal preference to other healthy Trains; (2) A higher priority train should be given preference to a lower priority train, provided the delay to the lower priority train is kept to a minimum; and so on. It is evident from this that train-scheduling may benefit from knowing if a train is healthy, what a train's priority is, and so on. But are priorities and train-health fixed, irrespective of the context? What values constitute acceptable delays to a low-priority train? Generating good train-schedules will require a combination of quantitative knowledge of train running times and qualitative knowledge about the train in isolation, and in relation to other trains.

In this paper, we propose a heuristic search method that comes under the broad category of an estimation of distribution algorithm (EDA). EDAs iteratively generate better solutions for the optimisation problem using machine-constructed models. Usually EDAs have used generative probabilistic models, such as Bayesian Networks, where domain knowledge needs to be translated into prior distributions and/or network topology. In this paper, we are concerned with problems for which such a translation is not evident. Our interest in ILP is that it presents perhaps one of the most flexible ways to use domain knowledge when constructing models. Recent work has shown that ILP models incorporating background knowledge were able to generate better-quality solutions in each EDA iteration [ash16]. However, efficient sampling is not straightforward, and ILP is unable to utilize the discovery of high-level features as efficiently as deep generative models. While neural models have been used for optimization [vinyals2015pointer], in this paper we attempt to combine the sampling and feature-discovery power of deep generative models with the background knowledge captured by ILP for optimization problems that require domain knowledge. The rule-based features discovered by the ILP engine are appended to the higher layers of a Deep Belief Network (DBN) while training. A subset of the features are then clamped on while sampling to generate samples consistent with the rules. This results in consistently improved sampling, which has a cascading positive effect on successive iterations of the EDA-based optimization procedure.

The rest of the paper is organised as follows. Section 2 provides a brief description of the EDA method we use for optimisation problems. Section 2.1 describes how ILP can be used within the iterative loop of an EDA for discovering rules that would distinguish good samples from bad. Section 3 describes how RBMs can be used to generate samples that conform to the rules discovered by the ILP engine. Section 4 describes an empirical evaluation demonstrating the improvement in the discovery of optimal solutions, followed by conclusions in Section 5.

2 EDA for optimization
-----------------------

The basic EDA approach we use is the one proposed by the MIMIC algorithm [mimic].
Assuming that we are looking to minimise an objective function F(x), where x is an instance from some instance-space X, the approach first constructs an appropriate machine-learning model to discriminate between samples of lower and higher value, i.e., F(x) ≤ θ and F(x) > θ, and then generates samples using this model.

Procedure EODS: Evolutionary Optimisation using DBNs for Sampling

1. Initialize population P := {xi}; θ := θ0
2. while not converged do:
   1. for all xi in P: label(xi) := 1 if F(xi) ≤ θ, else label(xi) := 0
   2. train DBN M to discriminate between the 1 and 0 labels, i.e., P(x : label(x) = 1 | M) > P(x : label(x) = 0 | M)
   3. regenerate P by repeated sampling using model M
   4. reduce threshold θ
3. return P

Figure 1: Evolutionary optimisation using a network model to generate samples.

Here we use Deep Belief Networks (DBNs) [hinton2011] for modelling our data distribution and for generating samples on each iteration of MIMIC. Deep Belief Nets are generative models composed of multiple latent-variable models called Restricted Boltzmann Machines (RBMs). In particular, as part of our larger optimisation algorithm, we wish to repeatedly train and then sample from the trained DBN in order to reinitialise our sample population for the next iteration, as outlined in Figure 1. To accomplish this, while training we append a single binary unit (variable) to the highest hidden layer of the DBN, and assign it the value 1 when the value of the sample is below θ and the value 0 when it is above θ. During training, this variable, which we refer to as the separator variable, learns to discriminate between good and bad samples. To sample from the DBN, we additionally clamp the separator variable to 1, so as to bias the network towards producing good samples, and we preserve the DBN weights from the previous MIMIC iteration as the initial weights for the subsequent iteration. This prevents retraining on the same data repeatedly, since the training data for one iteration subsumes the samples from the previous iteration. We now look at how ILP models can assist DBNs constructed for this purpose.
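To make the control flow of Procedure EODS concrete, here is a minimal runnable sketch. For self-containedness it fits a per-bit Bernoulli model to the good samples in place of the paper's DBN with a separator variable; the structure of the loop is otherwise as in Figure 1.

```python
import random

def eods_binary(F, n_bits, thresholds, pop_size=200):
    """Minimal EDA loop in the spirit of Procedure EODS (Fig. 1).
    F: objective over bit-vectors, to be minimised.
    The per-bit Bernoulli model stands in for the DBN; the paper
    instead trains a DBN and samples with the separator clamped to 1."""
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for theta in thresholds:
        # Label: keep the samples with F(x) <= theta as "good".
        good = [x for x in population if F(x) <= theta]
        if not good:                  # threshold too tight: fall back to best half
            population.sort(key=F)
            good = population[:pop_size // 2]
        # "Train": estimate P(bit_i = 1) from the good samples.
        probs = [sum(x[i] for x in good) / len(good) for i in range(n_bits)]
        # "Sample": regenerate the population from the model.
        population = [[1 if random.random() < p else 0 for p in probs]
                      for _ in range(pop_size)]
    return min(population, key=F)

# Toy usage: minimise the number of 1-bits (optimum: the all-zero vector).
best = eods_binary(F=sum, n_bits=20, thresholds=[12, 8, 4, 2, 0])
print(best, sum(best))
```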
3 EDA using ILP-assisted DBNs
------------------------------

### 3.1 ILP

The field of Inductive Logic Programming (ILP) has made steady progress over the past two and a half decades in advancing the theory, implementation and application of logic-based relational learning. A characteristic of this form of machine learning is that data, domain knowledge and models are usually—but not always—expressed in a subset of first-order logic, namely logic programs. Side-stepping for the moment the question "why logic programs?", domain knowledge (called background knowledge in the ILP literature) can be encodings of heuristics, rules of thumb, constraints, textbook knowledge and so on. It is evident that the use of some variant of first-order logic enables the automatic construction of models that use relations (used here in the formal sense of a truth-value assignment to n-tuples). Our interest here is in a form of relational learning concerned with the identification of functions (again used formally, in the sense of being a uniquely defined relation) whose domain is the set of instances in the data. An example is the construction of new features for data analysis based on existing relations ("f(m) = 1 if a molecule m has 3 or more benzene rings fused together, otherwise f(m) = 0": here concepts like benzene rings and connectivity of rings are generic relations provided in background knowledge). There is now a growing body of research suggesting that ILP-constructed relational features can substantially improve the predictive power of a statistical model (see, for example: [JoshiRS08]; [Amrita12]; [Specia_09]; [RamakrishnanJBS07]; [SpeciaSRN06]). Most of this work has concerned itself with discriminatory models, although there have been cases where such features have been incorporated within generative models. In this paper, we are interested in their use within a deep network model used for generating samples in an EDA for optimisation, as in Procedure EODS in Fig. 1.

### 3.2 ILP-assisted DBNs

Given some data instances x drawn from a set of instances X and domain-specific background knowledge, let us assume the ILP engine will be used to construct a model for discriminating between two classes (for simplicity, called good and bad). The ILP engine constructs a model for good instances using rules of the form hj: Class(x, good) ← Cpj(x).[1] Cpj: X ↦ {0, 1} denotes a "context predicate". A context predicate corresponds to a conjunction of literals that evaluates to TRUE (1) or FALSE (0) for any element of X. For meaningful features we will usually require that a Cpj contain at least one literal; in logical terms, we therefore require the corresponding hj to be definite clauses with at least two literals. A rule hj: Class(x, good) ← Cpj(x) is converted to a feature fj using a one-to-one mapping as follows: fj(x) = 1 iff Cpj(x) = 1 (and 0 otherwise). We will denote this function Feature. Thus Feature(hj) = fj and Feature⁻¹(fj) = hj. We will also sometimes refer to Features(H) = {f : h ∈ H and f = Feature(h)} and Rules(F) = {h : f ∈ F and h = Feature⁻¹(f)}. Each rule in an ILP model is thus converted to a single Boolean feature, and the model as a whole results in a set of Boolean features. Turning now to the EODS procedure in Fig. 1, we will construct ILP models for discriminating between F(x) ≤ θ (good) and F(x) > θ (bad). Conceptually, we treat the ILP features as high-level features for a deep belief network, and we append the data layer of the highest-level RBM with the values of the ILP features for each sample, as shown in Fig. 2.

[1] We note that in general for ILP, x need not be restricted to a single object and can consist of arbitrary tuples of objects, with rules of the form hj: Class(⟨x1, x2, …, xn⟩, c) ← Cpj(⟨x1, x2, …, xn⟩). But we do not require rules of this kind here.
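A minimal sketch of the Feature mapping just described, with hypothetical context predicates standing in for ILP-constructed ones (the predicates and the instance encoding below are invented for illustration):

```python
# Each rule h_j: Class(x, good) <- Cp_j(x) maps one-to-one to a Boolean
# feature f_j with f_j(x) = 1 iff Cp_j(x) holds.

def feature(context_predicate):
    """Feature(h_j): wrap a context predicate as a 0/1 feature."""
    return lambda x: 1 if context_predicate(x) else 0

# Hypothetical context predicates over a KRK-style 6-tuple
# x = (wk_file, wk_rank, wr_file, wr_rank, bk_file, bk_rank):
cp1 = lambda x: abs(x[0] - x[4]) >= 0 and abs(x[1] - x[5]) < 5
cp2 = lambda x: abs(x[1] - x[3]) < 3
ilp_features = [feature(cp) for cp in (cp1, cp2)]

x = (3, 1, 5, 7, 3, 4)
print([f(x) for f in ilp_features])
# These 0/1 values are what gets appended to the data layer of the
# highest-level RBM for each sample (Fig. 2(b)).
```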
Figure 2: Sampling from a DBN (a) with just a separator variable (b) with ILP features.

### 3.3 Sampling from the Logical Model

Recent work [ash16] suggests that if samples can be drawn from the success-set of the ILP-constructed model[2] then the efficiency of identifying near-optimal solutions could be significantly enhanced. A straightforward way of achieving this with an ILP-assisted DBN would appear to be to clamp all the ILP features, since this would bias the samples from the network towards the intersection of the success-sets of the corresponding rules (instances in the intersection are guaranteed to be in the success-set sought). However, this is unduly restrictive, since the samples sought are not ones that satisfy all rules, but ones that satisfy at least one rule in the model. The obvious modification is to clamp subsets of features. But not all samples generated with a subset of features clamped may be appropriate. With a subset of features clamped, an additional complication arises from the stochastic nature of the DBN's hidden units. This makes it possible for the DBN's unit corresponding to a logical feature fj to have the value 1 for an instance xi, while xi is not entailed by the background knowledge and logical rule hj. In turn, this means that with a feature-subset clamped, samples may be generated from outside the success-set of the corresponding rules. Given background knowledge B, we say a sample instance x generated by clamping a set of features F is aligned to H = Rules(F) iff B ∧ H ⊨ x (that is, x is entailed by B and H). A procedure to bias sampling of instances towards the success-set of the logical model constructed by ILP is shown in Fig. 3.

Given: background knowledge B; a set of rules H = {h1, h2, …, hN}; a DBN with F = {f1, f2, …, fN} as high-level features (fi = Feature(hi)); and a sample size M.
Return: a set of samples {x1, x2, …, xM} drawn from the success-set of B ∧ H.

1. S := ∅; k := 1
2. while k ≤ N do:
   1. randomly select a subset Fk of k features from F
   2. generate a small sample set X by clamping the features in Fk
   3. set countk := 0; for each sample x ∈ X: if x is aligned to Rules(Fk) (i.e., fj(x) = 1 implies x ∈ success-set(B ∧ hj) for each fj ∈ Fk), then countk := countk + 1
   4. k := k + 1
3. generate S by clamping k features, where k = argmax(count1, count2, …, countN)
4. return S

Figure 3: A procedure to generate samples aligned to a logical model H constructed by an ILP engine.

[2] These are the instances entailed by the model along with the background knowledge, which—assuming the rules are not recursive—we take to be the union of the success-sets of the individual rules in the model.
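A toy rendering of the procedure in Fig. 3, with `sample_clamped` and the per-rule entailment tests as hypothetical stand-ins for the DBN sampler and the test x ∈ success-set(B ∧ hj):

```python
import random

def best_clamp_size(rules, sample_clamped, trial_size=50):
    """Fig. 3, in miniature: for each subset size k, clamp a random
    k-subset of features, draw a small trial sample, and count the
    samples aligned with the clamped rules; return the best k.
    rules[j](x) stands in for the entailment test B AND h_j |= x."""
    n = len(rules)
    counts = {}
    for k in range(1, n + 1):
        subset = random.sample(range(n), k)
        trial = sample_clamped(subset, trial_size)
        # A sample is aligned iff every clamped feature's rule entails it.
        counts[k] = sum(all(rules[j](x) for j in subset) for x in trial)
    return max(counts, key=counts.get)

# Toy usage: instances are integers, "rules" are divisibility tests, and
# the sampler ignores the clamped subset (a real DBN would respect it).
rules = [lambda x: x % 2 == 0, lambda x: x % 3 == 0, lambda x: x % 5 == 0]
sampler = lambda subset, m: [random.randint(0, 99) for _ in range(m)]
print(best_clamp_size(rules, sampler))
```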
4 Empirical Evaluation
-----------------------

### 4.1 Aims

Our aims in the empirical evaluation are to investigate the following conjectures:

1. On each iteration, the EODS procedure will yield better samples with ILP features than without;
2. On termination, the EODS procedure will yield more near-optimal instances with ILP features than without;
3. Both procedures do better than random sampling from the initial training set.

It is relevant here to clarify what comparisons are intended in these statements. Conjecture (1) is essentially a statement about the gain in precision obtained by using ILP features. Let us denote by Pr(F(x) ≤ θ) the probability of generating an instance x with cost at most θ without ILP features to guide sampling, and by Pr(F(x) ≤ θ | Mk,B) the probability of obtaining such an instance with ILP features Mk,B obtained on iteration k of the EODS procedure using some domain knowledge B (if Mk,B = ∅, we take Pr(F(x) ≤ θ | Mk,B) = Pr(F(x) ≤ θ)). For (1) to hold, we require Pr(F(x) ≤ θk | Mk,B) > Pr(F(x) ≤ θk), given some relevant B. We estimate the probability on the left-hand side from the sample generated using the model, and the probability on the right-hand side from the datasets provided. Conjecture (2) relates to the gain in recall obtained by using the model, although it is more practical to examine actual numbers of near-optimal instances (true positives in the usual terminology). We compare the number of near-optimal instances in the sample generated by the DBN model with ILP features to that obtained using the DBN alone.

### 4.2 Materials

#### 4.2.1 Data

We use two synthetic datasets, one arising from the KRK chess endgame (an endgame with just White King, White Rook and Black King on the board), and the other a restricted but nevertheless hard 5×5 job-shop scheduling problem (scheduling 5 jobs of varying lengths onto 5 machines, each capable of processing just one task at a time). The optimisation problem we examine for the KRK endgame is to predict the depth-of-win with optimal play [bain:krkwin]. Although this aspect of the endgame has not been as popular in ILP as the task of predicting whether a "White-to-move position is illegal" [bain:krkillegal], it offers a number of advantages as a Drosophila for optimisation problems of the kind we are interested in. First, as with other chess endgames, KRK-win is a complex, enumerable domain for which there is complete, noise-free data. Second, optimal "costs" are known for all data instances. Third, the problem has been studied by chess experts at least since Torres y Quevedo built a machine, in 1910, capable of playing the KRK endgame. This has resulted in a substantial amount of domain-specific knowledge. We direct the reader to [breda:thesis] for the history of automated methods for the KRK endgame. For our purposes, it suffices to treat the problem as a form of optimisation, with the cost being the depth-of-win with Black to move, assuming minimax-optimal play. In principle, there are 64³ ≈ 260,000 possible positions for the KRK endgame, not all legal. Removing illegal positions, and redundancies arising from symmetries of the board, reduces the size of the instance space to about 28,000, with the distribution shown in Fig. 4(a). The sampling task here is to generate instances with depth-of-win equal to 0. Simple random sampling has a probability of about 1/1000 of generating such an instance once redundancies are removed. The job-shop scheduling problem is less controlled than the chess endgame, but is nevertheless representative of many real-life applications (like scheduling trains), and in general is known to be computationally hard.
| Cost | Instances | Cost | Instances |
| --- | --- | --- | --- |
| 0 | 27 (0.001) | 9 | 1712 (0.196) |
| 1 | 78 (0.004) | 10 | 1985 (0.267) |
| 2 | 246 (0.012) | 11 | 2854 (0.368) |
| 3 | 81 (0.015) | 12 | 3597 (0.497) |
| 4 | 198 (0.022) | 13 | 4194 (0.646) |
| 5 | 471 (0.039) | 14 | 4553 (0.808) |
| 6 | 592 (0.060) | 15 | 2166 (0.886) |
| 7 | 683 (0.084) | 16 | 390 (0.899) |
| 8 | 1433 (0.136) | draw | 2796 (1.0) |

Total instances: 28056

(a) Chess

| Cost | Instances | Cost | Instances |
| --- | --- | --- | --- |
| 400–500 | 10 (0.0001) | 1000–1100 | 24067 (0.748) |
| 500–600 | 294 (0.003) | 1100–1200 | 15913 (0.907) |
| 600–700 | 2186 (0.025) | 1200–1300 | 7025 (0.978) |
| 700–800 | 7744 (0.102) | 1300–1400 | 1818 (0.996) |
| 800–900 | 16398 (0.266) | 1400–1500 | 345 (0.999) |
| 900–1000 | 24135 (0.508) | 1500–1700 | 66 (1.0) |

Total instances: 100000

(b) Job-Shop

Figure 4: Distribution of cost values. The numbers in parentheses are cumulative frequencies.

Data instances for Chess are 6-tuples, representing the rank and file (X and Y values) of the 3 pieces involved. For the RBM, these are encoded as a 48-dimensional binary vector, where every eight bits is a one-hot encoding of a piece's rank or file. At each iteration k of the EODS procedure, some instances with depth-of-win ≤ θk and the rest with depth-of-win > θk are used to construct the ILP model, and the resulting features are appended to train the RBM model as described in Section 3.2.[3] We note that when constructing a model on iteration k, it is permissible to use all instances from iterations 1, 2, …, (k−1) as data for model construction. Data instances for Job-Shop are schedules, with associated start- and end-times for each task on a machine, along with the total cost of the schedule. On iteration i of the EODS procedure, models are constructed to predict whether the cost of a schedule will be ≤ θi or not.[4] Again, on iteration i, it is permissible to use data from previous iterations.

[3] The θk values are pre-computed assuming optimum play.
[4] The total cost of a schedule includes any idle time, since for each job a task must be completed before the next task of that job can be started.

#### 4.2.2 Background Knowledge

For Chess, background predicates encode the following (WK denotes the White King, WR the White Rook, and BK the Black King): (a) distance between pieces: WK–BK, WR–BK, WK–WR; (b) file and distance patterns: WR–BK, WK–WR, WK–BK; (c) "alignment distance": WR–BK; (d) adjacency patterns: WK–WR, WK–BK, WR–BK; (e) "between" patterns: WR between WK and BK, WK between WR and BK, BK between WK and WR; (f) distance of BK to the closest edge; (g) distance of BK to the closest corner; (h) distance of WK to the centre; and (i) inter-piece patterns: kings in opposition, kings almost in opposition, L-shaped pattern. We direct the reader to [breda:thesis] for the history of these concepts and their definitions. A sample rule generated for depth ≤ 2 is that the distance between the files of the two kings be greater than or equal to zero, that the ranks of the kings be separated by a distance of less than five, and that the ranks of the white king and the rook be separated by less than 3.
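As a small illustration of the encoding described in Section 4.2.1, here is one way the 48-bit one-hot representation might be constructed (the exact ordering of coordinates within the vector is my assumption):

```python
def encode_krk(position):
    """One-hot encode a KRK 6-tuple (files/ranks in 1..8) into 48 bits:
    eight bits per coordinate, exactly one bit set per coordinate."""
    bits = []
    for coord in position:  # e.g. (WK file, WK rank, WR file, WR rank, BK file, BK rank)
        one_hot = [0] * 8
        one_hot[coord - 1] = 1
        bits.extend(one_hot)
    return bits

v = encode_krk((3, 1, 5, 7, 3, 4))
print(len(v), sum(v))   # 48 bits, exactly 6 of them set
```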
For Job-Shop, background predicates encode: (a) schedule job J "early" on machine M (early means first or second); (b) schedule job J "late" on machine M (late means last or second-last); (c) job J has the fastest task for machine M; (d) job J has the slowest task for machine M; (e) job J has a fast task for machine M (fast means fastest or second-fastest); (f) job J has a slow task for machine M (slow means slowest or second-slowest); (g) waiting time for machine M; (h) total waiting time; (i) time taken before executing a task on a machine. More precisely, the predicates for (g)–(i) encode upper and lower bounds on these times, using the standard inequality predicates ≤ and ≥.

#### 4.2.3 Algorithms and Machines

The ILP engine we use is Aleph (Version 6, available from A.S. on request). All ILP theories were constructed on an Intel Core i7 laptop, using a VMware virtual machine running Fedora 13, with an allocation of 2GB for the virtual machine. The Prolog compiler used was Yap, version 6.1.3 (<http://www.dcc.fc.up.pt/~vsc/Yap/>). The RBM was implemented in the Theano library and run on an NVidia Tesla K-40 GPU card.

### 4.3 Method

Our method is straightforward. For each optimisation problem and domain knowledge B, using a sequence of threshold values ⟨θ1, θ2, …, θn⟩, on iteration k (1 ≤ k ≤ n) of the EODS procedure:

1. Obtain an estimate of Pr(F(x) ≤ θk) using a DBN with a separator variable;
2. Obtain an estimate of Pr(F(x) ≤ θk | Mk,B) by constructing an ILP model for discriminating between F(x) ≤ θk and F(x) > θk, and using the features learnt by the ILP model to guide the DBN sampling;
3. Compute the ratio of Pr(F(x) ≤ θk | Mk,B) to Pr(F(x) ≤ θk).

The following details are relevant:

* The sequence of thresholds for Chess is ⟨8, 4, 2, 0⟩. For Job-Shop, the sequence is ⟨900, 890, 880, …, 600⟩. Thus, θ* = 0 for Chess and 600 for Job-Shop, which means we require exactly optimal solutions for Chess.
* Experience with the ILP engine used here (Aleph) suggests that the most sensitive parameter is the one defining a lower bound on the precision of acceptable clauses (the minacc setting in Aleph). We report experimental results obtained with minacc = 0.7, which has been used in previous experiments with the KRK dataset. The background knowledge for Job-Shop does not appear to be sufficiently powerful to allow the identification of good theories with short clauses: the usual Aleph setting of up to 4 literals per clause leaves most of the training data ungeneralised. We therefore allow an upper bound of up to 10 literals for Job-Shop, with a corresponding increase in the number of search nodes to 10000 (Chess uses the default settings of 4 and 5000 for these parameters).
* In the EODS procedure, the initial sample is obtained using a uniform distribution over all instances; call this P0. On the first iteration (k = 1), the datasets E1+ and E1− are obtained by computing the (actual) costs for instances in P0, and an ILP model M1,B (or simply M1) is constructed. A DBN model is constructed both with and without ILP features. We obtain samples from the DBN using CD-6, i.e., by running the Gibbs chain for six steps. On each iteration k, an estimate of Pr(F(x) ≤ θk) can be obtained from the empirical frequency distribution of instances with values ≤ θk and > θk. For the synthetic problems here, these estimates are given in Fig. 4.
For Pr(F(x) ≤ θk | Mk,B), we obtain the frequency of F(x) ≤ θk in the sample Pk generated using the model.

* Readers will recognise that the ratio of Pr(F(x) ≤ θk | Mk,B) to Pr(F(x) ≤ θk) is equivalent to the gain in precision obtained by using an ILP model over a non-ILP model. Specifically, if this ratio is approximately 1, then there is no value in using the ILP model. The probabilities computed also provide one way of estimating the sampling efficiency of the models: the higher the probability, the fewer samples will be needed to obtain an instance x with F(x) ≤ θk.

### 4.4 Results

Results relevant to conjectures (1) and (2) are tabulated in Fig. 5 and Fig. 6. The principal conclusions that can be drawn from the results are these:

1. For both problems, and for every threshold value θk, the probability of obtaining instances with cost at most θk with ILP-guided RBM sampling is substantially higher than without ILP. This provides evidence that ILP-guided DBN sampling results in better samples than DBN sampling alone (Conjecture 1).
2. For both problems, and for every threshold value θk, samples obtained with ILP-guided sampling contain a substantially higher number of near-optimal instances than samples obtained using a DBN alone (Conjecture 2).

Additionally, Fig. 7 shows the cumulative impact of ILP on (a) the distribution of good solutions obtained and (b) the cascading improvement over the DBN alone for the Job-Shop problem. The DBN with ILP was able to arrive at the optimal solution within 10 iterations.

| Model | k=1 | k=2 | k=3 | k=4 |
| --- | --- | --- | --- | --- |
| None | 0.134 | 0.042 | 0.0008 | 0.0005 |
| DBN | 0.220 | 0.050 | 0.015 | 0.0008 |
| DBN+ILP | 0.345 | 0.111 | 0.101 | 0.0016 |

(a) Chess

| Model | k=1 | k=2 | k=3 | k=4 |
| --- | --- | --- | --- | --- |
| None | 0.040 | 0.036 | 0.029 | 0.024 |
| DBN | 0.209 | 0.234 | 0.248 | 0.264 |
| DBN+ILP | 0.256 | 0.259 | 0.268 | 0.296 |

(b) Job-Shop

Figure 5: Probabilities Pr(F(x) ≤ θk | Mk) of obtaining good instances x for each iteration k of the EODS procedure. That is, the column k=1 denotes Pr(F(x) ≤ θ1) after iteration 1, the column k=2 denotes Pr(F(x) ≤ θ2) after iteration 2, and so on. In effect, this is an estimate of the precision when predicting F(x) ≤ θk. "None" in the model column stands for the probabilities of the instances under simple random sampling (Mk = ∅).

| Model | k=1 | k=2 | k=3 | k=4 |
| --- | --- | --- | --- | --- |
| DBN | 5/27 | 11/27 | 11/27 | 12/27 |
| DBN+ILP | 3/27 | 17/27 | 21/27 | 22/27 |

(a) Chess

| Model | k=11 | k=12 | k=13 |
| --- | --- | --- | --- |
| DBN | 7/304 | 10/304 | 18/304 |
| DBN+ILP | 9/304 | 18/304 | 27/304 |

(b) Job-Shop

Figure 6: Fraction of near-optimal instances (F(x) ≤ θ*) generated on each iteration of EODS. In effect, this is an estimate of the recall (true-positive rate, or sensitivity) when predicting F(x) ≤ θ*. The fraction a/b denotes that a of the b near-optimal instances in the instance space were generated.
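As a worked instance of the precision gain defined in Section 4.3, using the paper's own numbers from Fig. 5(a) at k = 1 for Chess:

$$\frac{\Pr(F(x) \le \theta_1 \mid M_1, B)}{\Pr(F(x) \le \theta_1)} = \frac{0.345}{0.134} \approx 2.6,$$

i.e., ILP-guided sampling is roughly 2.6 times more likely than simple random sampling to produce a good instance on the first iteration, and 0.345/0.220 ≈ 1.6 times more likely than the DBN alone.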
Figure 7: Impact of ILP on the EODS procedure for Job-Shop: (a) distribution of solution end-times generated on iterations 1, 5, 10 and 13, with and without ILP; (b) cumulative near-optimal solutions obtained with and without ILP features over 13 iterations.

5 Conclusions and Future Work
------------------------------

In this paper we demonstrate that DBNs can be used as efficient samplers for EDA-style optimisation approaches. We further combine the sampling and feature-discovery power of Deep Belief Networks with the rules discovered by an ILP engine from background knowledge, with a view towards optimisation problems that involve some degree of domain information. The optimisation is performed iteratively via an EDA mechanism, and empirical results demonstrate the value of incorporating ILP-based features into the DBN. In future work we intend to combine ILP-based background rules with more sophisticated deep generative models proposed recently [Gregor14]; [Gregor15], and to look at incorporating the rules directly into the cost function, as in [hu16].
Help Request: How to maintain focus when emotionally overwhelmed So my personal life just got very interesting. In a net-positive way, certainly, but still, I am, as Calculon put it, "filled with a large number of powerful emotions!" -- some of which are anxious and/or panicky. This is making it annoyingly difficult to focus at work. I am an absolutely textbook "Attention Deficit Oh-look-a-squirrel!" case at the best of times, and this seems to have made it much, much worse. I can handle small tasks, but anything where I'm going to have to spend an hour solving multiple problems before producing results, I can hardly make myself start. Has anyone dealt with the problem of maintaining productive focus while emotionally overwhelmed/exhausted, and if so, do you have any pointers?
Astronomical Waste: The Opportunity Cost of Delayed Technological Development

NICK BOSTROM
Oxford University

ABSTRACT. With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur. This goal has such high utility that standard utilitarians ought to focus all their efforts on it. Utilitarians of a 'person-affecting' stripe should accept a modified version of this conclusion. Some mixed ethical views, which combine utilitarian considerations with other criteria, will also be committed to a similar bottom line.

I. THE RATE OF LOSS OF POTENTIAL LIVES

As I write these words, suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives. The rate of this loss boggles the mind. One recent paper speculates, using loose theoretical considerations based on the rate of increase of entropy, that the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization.[1] This estimate assumes that all the lost entropy could have been used for productive purposes, although no currently known technological mechanisms are even remotely capable of doing that. Since the estimate is meant to be a lower bound, this radically unconservative assumption is undesirable. We can, however, get a lower bound more straightforwardly by simply counting the number of stars in our galactic supercluster and multiplying this number by the amount of computing power that the resources of each star could be used to generate, using technologies for whose feasibility a strong case has already been made. We can then divide this total by the estimated amount of computing power needed to simulate one human life. As a rough approximation, let us say the Virgo Supercluster contains 10^13 stars. One estimate of the computing power extractable from a star with an associated planet-sized computational structure, using advanced molecular nanotechnology[2], is 10^42 operations per second.[3] A typical estimate of the human brain's processing power is roughly 10^17 operations per second or less.[4] Not much more seems to be needed to simulate the relevant parts of the environment in sufficient detail to enable the simulated minds to have experiences indistinguishable from typical current human experiences.[5] Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
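To make the arithmetic behind these figures explicit (a back-of-the-envelope restatement of the estimates just given, not a new claim):

$$\frac{10^{13}\ \text{stars} \times 10^{42}\ \text{ops/s per star}}{10^{17}\ \text{ops/s per simulated life}} = 10^{38}\ \text{lives sustainable at any one time},$$

and since a century is roughly 3 × 10^9 seconds, losing 10^38 potential lives per century of delay is equivalent to losing about 10^38 / (3 × 10^9) ≈ 10^29 potential lives per second.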
While this estimate is conservative in that it assumes only computational mechanisms whose implementation has been at least outlined in the literature, it is useful to have an even more conservative estimate that does not assume a non-biological instantiation of the potential persons. Suppose that about 10^10 biological humans could be sustained around an average star. Then the Virgo Supercluster could contain 10^23 biological humans. This corresponds to a loss of potential of over 10^13 potential human lives per second of delayed colonization. What matters for present purposes is not the exact numbers but the fact that they are huge. Even with the most conservative estimate, assuming a biological implementation of all persons, the potential for over ten trillion potential human beings is lost for every second of postponement of colonization of our supercluster.[6]

II. THE OPPORTUNITY COST OF DELAYED COLONIZATION

From a utilitarian perspective, this huge loss of potential human lives constitutes a correspondingly huge loss of potential value. I am assuming here that the human lives that could have been created would have been worthwhile ones. Since it is commonly supposed that even current human lives are typically worthwhile, this is a weak assumption. Any civilization advanced enough to colonize the local supercluster would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living. The effect on total value, then, seems greater for actions that accelerate technological development than for practically any other possible action. Advancing technology (or its enabling factors, such as economic productivity) even by such a tiny amount that it leads to colonization of the local supercluster just one second earlier than would otherwise have happened amounts to bringing about more than 10^29 human lives (or 10^13 human lives if we use the most conservative lower bound) that would not otherwise have existed. Few other philanthropic causes could hope to match that level of utilitarian payoff. Utilitarians are not the only ones who should strongly oppose astronomical waste. There are many views about what has value that would concur with the assessment that the current rate of wastage constitutes an enormous loss of potential value. For example, we can take a thicker conception of human welfare than commonly supposed by utilitarians (whether of a hedonistic, experientialist, or desire-satisfactionist bent), such as a conception that locates value also in human flourishing, meaningful relationships, noble character, individual expression, aesthetic appreciation, and so forth. So long as the evaluation function is aggregative (does not count one person's welfare for less just because there are many other persons in existence who also enjoy happy lives) and is not relativized to a particular point in time (no time-discounting), the conclusion will hold. These conditions can be relaxed further. Even if the welfare function is not perfectly aggregative (perhaps because one component of the good is diversity, the marginal rate of production of which might decline with increasing population size), it can still yield a similar bottom line provided only that at least some significant component of the good is sufficiently aggregative. Similarly, some degree of time-discounting future goods could be accommodated without changing the conclusion.[7]
III. THE CHIEF GOAL FOR UTILITARIANS SHOULD BE TO REDUCE EXISTENTIAL RISK

In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development. However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point of view) a delay of over 10 million years. Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative 'Maximize expected aggregate utility!' can be simplified to the maxim 'Minimize existential risk!'.

IV. IMPLICATIONS FOR AGGREGATIVE PERSON-AFFECTING VIEWS

The argument above presupposes that our concern is to maximize the total amount of well-being. Suppose instead that we adopt a 'person-affecting' version of utilitarianism, according to which our obligations are primarily towards currently existing persons and to those persons that will come to exist.[9] On such a person-affecting view, human extinction would be bad only because it makes past or ongoing lives worse, not because it constitutes a loss of potential worthwhile lives. What ought someone who embraces this doctrine do? Should he emphasize speed or safety, or something else? To answer this, we need to consider some further matters. Suppose one thinks that the probability is negligible that any existing person will survive long enough to get to use a significant portion of the accessible astronomical resources, which, as described in the opening section of this paper, are gradually going to waste. Then one's reason for minimizing existential risk is that sudden extinction would cut off an average of, say, 40 years from each of the current (six billion or so) human lives.[10] While this would certainly be a large disaster, it is in the same big ballpark as other ongoing human tragedies, such as world poverty, hunger and disease. On this assumption, then, a person-affecting utilitarian should regard reducing existential risk as a very important but not completely dominating concern. There would in this case be no easy answer to what he ought to do. Where he ought to focus his efforts would depend on detailed calculations about which area of philanthropic activity he would happen to be best placed to make a contribution to.
Arguably, however, we ought to assign a non-negligible probability to some current people surviving long enough to reap the benefits of a cosmic diaspora. A so-called technological 'singularity' might occur in our natural lifetime[11], or there could be a breakthrough in life-extension, brought about, perhaps, as a result of machine-phase nanotechnology that would give us unprecedented control over the biochemical processes in our bodies and enable us to halt and reverse the aging process.[12] Many leading technologists and futurist thinkers give a fairly high probability to these developments happening within the next several decades.[13] Even if you are skeptical about their prognostications, you should consider the poor track record of technological forecasting. In view of the well-established unreliability of many such forecasts, it would seem unwarranted to be so confident in one's prediction that the requisite breakthroughs will not occur in our time as to give the hypothesis that they will a probability of less than, say, 1%. The expected utility of a 1% chance of realizing an astronomically large good could still be astronomical. But just how good would it be for (some substantial subset of) currently living people to get access to astronomical amounts of resources? The answer is not obvious. On the one hand, one might reflect that in today's world, the marginal utility for an individual of material resources declines quite rapidly once his basic needs have been met. Bill Gates's level of well-being does not seem to dramatically exceed that of many a person of much more modest means. On the other hand, advanced technologies of the sorts that would most likely be deployed by the time we could colonize the local supercluster may well provide new ways of converting resources into well-being. In particular, material resources could be used to greatly expand our mental capacities and to indefinitely prolong our subjective lifespan. And it is by no means clear that the marginal utility of extended healthspan and increased mental powers must be sharply declining above some level. If there is no such decline in marginal utility, we have to conclude that the expected utility to current individuals[14] of successful colonization of our supercluster is astronomically great, and this conclusion holds even if one gives a fairly low probability to that outcome. A long shot it may be, but for an expected utility maximizer, the benefit of living for perhaps billions of subjective years with greatly expanded capacities under fantastically favorable conditions could more than make up for the remote prospects of success. Now, if these assumptions are made, what follows about how a person-affecting utilitarian should act? Clearly, avoiding existential calamities is important, not just because it would truncate the natural lifespan of six billion or so people, but also – and given the assumptions this is an even weightier consideration – because it would extinguish the chance that current people have of reaping the enormous benefits of eventual colonization. However, by contrast to the total utilitarian, the person-affecting utilitarian would have to balance this goal with another equally important desideratum, namely that of maximizing the chances of current people surviving to benefit from the colonization.
For the person-affecting utilitarian, it is not enough that humankind survives to colonize; it is crucial that extant people be saved. This should lead her to emphasize speed of technological development, since the rapid arrival of advanced technology would surely be needed to help current people stay alive until the fruits of colonization could be harvested. If the goal of speed conflicts with the goal of global safety, the total utilitarian should always opt to maximize safety, but the person-affecting utilitarian would have to balance the risk of people dying of old age with the risk of them succumbing in a species-destroying catastrophe. Mixed ethical views, which also incorporate non-utilitarian elements, might or might not yield one of these bottom lines, depending on the nature of what is added.[15]

[1] M. Cirkovic, 'Cosmological Forecast and its Practical Significance', Journal of Evolution and Technology, xii (2002), http://www.jetpress.org/volume12/CosmologicalForecast.pdf.
[2] K. E. Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, New York, John Wiley & Sons, Inc., 1992.
[3] R. J. Bradbury, 'Matrioshka Brains', Manuscript, 2002, http://www.aeiveos.com/~bradbury/MatrioshkaBrains/MatrioshkaBrains.html.
[4] N. Bostrom, 'How Long Before Superintelligence?', International Journal of Futures Studies, ii (1998); R. Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, New York, Viking, 1999. The lower estimate is in H. Moravec, Robot: Mere Machine to Transcendent Mind, Oxford, 1999.
[5] N. Bostrom, 'Are You Living in a Simulation?', Philosophical Quarterly, liii (211). See also http://www.simulation-argument.com.
[6] The Virgo Supercluster contains only a small part of the colonizable resources in the universe, but it is sufficiently big to make the point. The bigger the region we consider, the less certain we can be that significant parts of it will not have been colonized by a civilization of non-terrestrial origin by the time we could get there.
[7] Utilitarians commonly regard time-discounting as inappropriate in evaluating moral goods (see e.g. R. B. Brandt, Morality, Utilitarianism, and Rights, Cambridge, 1992, pp. 23f.). However, it is not clear that utilitarians can avoid compromising on this principle in view of the possibility that our actions could conceivably have consequences for an infinite number of persons (a possibility that we set aside for the purposes of this paper).
[8] N. Bostrom, 'Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards', Journal of Evolution and Technology, ix (2002), http://www.jetpress.org/volume9/risks.html.
[9] This formulation of the position is not necessarily the best possible one, but it is simple and will serve for the purposes of this paper.
[10] Or whatever the population is likely to be at the time when doomsday would occur.
[11] See e.g. V. Vinge, 'The Coming Technological Singularity', Whole Earth Review, Winter issue (1993).
[12] R. A. Freitas Jr., Nanomedicine, Vol. 1, Georgetown, Landes Bioscience, 1999.
[13] E.g. Moravec, Kurzweil, and Vinge, op. cit.; E. Drexler, Engines of Creation, New York, Anchor Books, 1986.
[14] On the person-affecting view, the relevant factor is not the temporal location of a person per se. In principle, effects on the well-being of a past or a future person could also be an appropriate target for moral concern.
In practice, however, the effect on the well-being of past persons is likely to be relatively small or even zero (depending on which conception of well-being one adopts), and the effect (of any action that has a significant effect on the future) on not-yet-existing persons is very likely to be such that competing alternative actions would lead to separate sets of possible persons coming into existence, for example as a result of different sperm fertilizing different eggs, and thus there would typically be no future persons whose level of well-being is affected by our current actions. Our actions might affect which future persons there will be, but on the person-affecting view, such results provide no general moral reasons for action. One exception is fetuses and fertilized eggs, which are not yet persons, but which may become persons in the future, and whose lives our current actions may influence.
[15] I'm grateful for comments from John Broome, Milan Cirkovic, Roger Crisp, Robin Hanson, and James Lenman, and for the financial support of a British Academy Postdoctoral Award.
Meetup : Moscow meetup: Quantum physics is fun Discussion article for the meetup : Moscow meetup: Quantum physics is fun WHEN: 26 October 2014 03:00:00PM (+0400) WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16 Here's our plan: * Calibration excercise announce. * Temporal symmetry in quantum theory and/or philosophical problems of the modern quantum physics. * Structuring excercise in pairs/microgroups. * A talk about Chomsky and/or Pinker. Details and schedule: https://lesswrong-ru.hackpad.com/-26--ljrbZz2bAxa Yudcoins, positive reinforcement and pizza will all be present. If you've been to our meetups, you know what I'm talking about, and if you didn't, the best way to find out is to come and see for yourself. Info for newcomers: We gather in the Yandex office, you need the first revolving door under the archway. Here is a guide how to get there: http://company.yandex.ru/contacts/redrose/ Try to come in time, we will allow latecomers to enter every 15 minutes. Call Slava or send him SMS at +7(926)313-96-42 if you're late. We start at 14:00 and stay until at least 19-20. Please pay attention that we only gather near the entrance and then come inside. Discussion article for the meetup : Moscow meetup: Quantum physics is fun
The Pit If you brought a man with keen ears to the edge of the pit and dropped a quarter over exactly the right spot, you could count to eleven before he heard it hit the ground. If you next told the man that a sliver of sunlight was visible from the very bottom of said pit, he might have squinted at you skeptically. If you proceeded to say that the bowels of this same pit were inhabited by twenty-odd live human beings, he would certainly have slapped you across the side of the head and called you a shameless liar. But you wouldn’t have lied once. The inhabitants of the pit – the pitfolk – were frail people, bone-pale from the perennial lack of sunlight, all taut skin wrapped about wan elbows. However they shifted their bodies to and fro, they were bound – as if cowed by that measly sliver of sunlight – to walk hunched over, keeping their faces downcast. Through the decades, the pitfolk developed an extraordinary black and white vision, as all they could see were the meager shadows which shifted around their ankles. From these faded images the pitfolk deduced the whole of their reality. This made for a rather miserable experience, but it did not stop the pitfolk from building an entire way of life about the dance of dim shadows – black silhouettes against grey stone – that was their everything. Each day at noon, when the dim light against their backs shone brightest, the pitfolk gathered in a large circle with one tribe member or another in the center, and that chosen member would play out a long and complex dance with the long and dextrous fingers attached to his long and sinewy limbs. The whole circle would watch intently as the shadows cast from the dance leaped across the ground, swaying to a silent rhythm. The full performance, which lasted nearly two hours, had been passed down from parent to child as long as any living tribe member could remember, and probably longer. And although each generation brought into it their own unique flicks of the finger and twirl
Funding is All You Need: Getting into Grad School by Hacking the NSF GRFP Fellowship Fellowship is All You Need: Getting into Grad School by Hacking the NSF GRFP Last year I started having delusions of grad school. By delusions I mean that on paper, in the numeric and text fields that actually make it into the application, I looked utterly unqualified to do machine learning research, let alone at a top school. I had no computer science publications, no masters degree, and a 3.3 GPA, in materials engineering. I was a self-taught washed up startup founder six years out of school with no advisor and three months work experience in my proposed field of NLP. Only two people had ever seen me write code; one was my current boss and the other was my ex-cofounder, who also happened to be my brother, so I couldn’t even scrape together a single letter of recommendation attesting to my programming talents. I did take a programming class once, CSE142: Intro to Computer Programming I, where I got a C. But next month I’ll be starting my PhD studies at Columbia advised by Zhou Yu and Luis Gravano. They’re really really good at what they do and there’s no school I’d rather be at. This is the story of how I leveraged the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) to get my unqualified ass into grad school. The technical guide follows. Table of Contents * The Hack * Technical Guide * Abstract * Background * Normal Route * Funding is All You Need * What Does the NSF Want? * Methods * How to Win a GRFP * Take Risks * The Hero's Journey * Editing * Timeline * What to Do Once You've Won * Results & Conclusion * Footnotes * Appendix 1: Useful Links The Hack In June of 2021 I forgot to register my +1 at a wedding and got placed next to a cool dude named Logan who told me just to shoot my shot on PhD applications even though I didn’t feel qualified. After all, they’re kind of a crap shoot anyway. Lacking any proper mentorship I took this advice at face value and spent the
Two sources of beyond-episode goals (Section 2.2.2 of “Scheming AIs”) This is Section 2.2.2 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?”. There’s also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I’m hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section here, or search "Joe Carlsmith Audio" on your podcast app. Two sources of beyond-episode goals Our question, then, is whether we should expect models to have goals that extend beyond the time horizon of the incentivized episode – that is, beyond the time horizon that training directly pressures the model to care about. Why might this happen? We can distinguish between two different ways. * On the first, the model develops beyond-episode goals for reasons independent of the way in which beyond-episode goals motivate instrumental training-gaming. I'll call these "training-game-independent" beyond-episode goals. * On the second, the model develops beyond-episode goals specifically because they result in instrumental training-gaming. That is, SGD "notices" that giving the model beyond-episode goals would cause it to instrumentally training-game, and thus to do better in training, so it explicitly moves the model's motives in the direction of beyond-episode goals, even though this wouldn't have happened "naturally." I'll call these "training-gaming-dependent" beyond-episode goals. These have importantly different properties – and I think it's worth tracking, in a given analysis of scheming, which one is at stake. Let's look at each in turn. Training-game-independent beyond-episode goals My sense is that the most common story about how schemers arise is via training-game-independent beyond-episode goals.[1] In particular, the story goes: the model develops some kind of beyond-episode goal, pursuit of which correlates well enough with getting reward-on-the-episode that th
The shallow reality of 'deep learning theory'

> *Produced under the mentorship of Evan Hubinger as part of the* [*SERI ML Alignment Theory Scholars Program*](https://serimats.org/) *- Winter 2022 Cohort*

Most results under the umbrella of "deep learning theory" are not actually deep, about learning, or even theories. This is because classical learning theory makes the **wrong assumptions**, takes the **wrong limits**, uses the **wrong metrics**, and aims for the **wrong objectives**. Learning theorists are stuck in a rut of one-upmanship, vying for vacuous bounds that don't say anything about any systems of actual interest.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1676773245/mirroredImages/BDTfddkttFXHqGnEi/hpeyxystljvq3hgvkxlx.png)Yudkowsky tweeting about statistical learning theorists. ([Okay, not really.](https://twitter.com/ESYudkowsky/status/1613622386150211584))

In particular, I'll argue throughout this sequence that:

* **Empirical risk minimization** is the wrong framework, and risk is a weak foundation.
* In **approximation theory**, the universal approximation results are too general (they do not constrain *efficiency*) while the "depth separation" results meant to demonstrate the role of depth are too specific (they involve constructing contrived, unphysical target functions).
* **Generalization theory** has only two tricks, and they're both limited:
  + **Uniform convergence** is the wrong approach, and model class complexities (VC dimension, Rademacher complexity, and covering numbers) are the wrong metric. Understanding deep learning requires looking at the microscopic structure *within* model classes.
  + **Robustness to noise** is an imperfect proxy for generalization, and techniques that rely on it (margin theory, sharpness/flatness, compression, PAC-Bayes, etc.) are oversold.
* **Optimization theory** is a bit better, but training-time guarantees involve questionable assumptions, and the obsession with second-order optimization is delusional. Also, the NTK is bad. Get over it.
* At a higher level, the obsession with deriving **bounds** for approximation/generalization/learning behavior is misguided. These bounds serve mainly as political benchmarks rather than a source of theoretical insight. More attention should go towards explaining empirically observed phenomena like double descent (which, to be fair, is starting to happen).

That said, there are new approaches that I'm more optimistic about. In particular, I think that [*singular learning theory*](https://www.lesswrong.com/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick) (SLT) is the most likely path to lead to a "theory of deep learning" because it (1) has stronger theoretical foundations, (2) engages with the structure of individual models, and (3) gives us a principled way to bridge between this microscopic structure and the macroscopic properties of the model class[[1]](#fnwg1aym99rf). I expect the field of mechanistic interpretability and the eventual formalism of phase transitions and "sharp left turns" to be grounded in the language of SLT.

Why theory?
-----------

A mathematical theory of learning and intelligence could form a valuable tool in the alignment arsenal, one that helps us:

* [**Develop and scale interpretability tools**](https://www.lesswrong.com/posts/rEMpTapcAzjTiSckf/on-developing-a-mathematical-theory-of-interpretability)
* **Inspire better experiments** (i.e., [focus our bits of attention more effectively](https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance)).
* **Establish a common language** between experimentalists and theorists.

That's not to say that the right theory of learning is risk-free:

* A good theory could inspire **new capabilities**. [We didn't need a theory of mechanics to build the first vehicles, but we couldn't have gotten to the moon without it.](https://metauni.org/slt/align-faq)
* The wrong theory could **mislead us**. Just as theory tells us where to look, it also tells us where not to look. The wrong theory could cause us to neglect important parts of the problem.
* It could be **one prolonged nerd-snipe** that draws attention and resources away from other critical areas in the field. Brilliant string theorists aren't exactly helping advance living and technology standards by computing the partition functions of black holes in 5D de Sitter spaces.[[2]](#fndc6r5bvxb07)

All that said, I think the benefits currently outweigh the risks, especially if we put the right infosec policy in place if learning theory starts showing signs of any practical utility. It's fortunate, then, that we haven't seen those signs yet.

Outline
-------

My aims are:

* To discourage other alignment researchers from **wasting their time**.
* To argue for **what makes singular learning theory different** and why I think it the likeliest contender for an eventual *grand unified theory of learning*.
* To invoke **Cunningham's law** — i.e., to get other people to tell me where I'm wrong and what I've been missing in learning theory.

There's also the question of integrity: if I am to criticize an entire field of people smarter than I am, I had better present a strong argument and ample evidence. Throughout the rest of this sequence, I'll be drawing on notes I compiled from lecture notes by [Telgarsky](https://mjt.cs.illinois.edu/dlt/), [Moitra](https://people.csail.mit.edu/moitra/408c.html), [Grosse](https://www.cs.toronto.edu/~rgrosse/courses/csc421_2019/), [Mossel](http://elmos.scripts.mit.edu/mathofdeeplearning/2017/03/09/mathematics-of-deep-learning-lecture-1/), [Ge](https://courses.cs.duke.edu//fall16/compsci590.2/), and [Arora](https://www.cs.princeton.edu/courses/archive/fall18/cos597G/), books by [Roberts et al.](https://arxiv.org/abs/2106.10165) and [Hastie et al.](https://hastie.su.domains/ElemStatLearn/), a festschrift of [Chervonenkis](https://link.springer.com/book/10.1007/978-3-319-21852-6), and a litany of articles.[[3]](#fny5hhs1ffq7o) The sequence follows the three-fold division of approximation, generalization, and optimization preferred by the learning theorists. There's an additional preface on why empirical risk minimization is flawed (up next) and an epilogue on why singular learning theory seems different.
* [The shallow reality of 'deep learning theory'](https://www.lesswrong.com/posts/BDTfddkttFXHqGnEi/the-shallow-reality-of-deep-learning-theory) (You are here)

Coming soon:

* Empirical risk minimization is fundamentally confused
* On approximation — the cosmic waste of universal approximation
* On generalization — against PAC-learning
* On optimization — the NTK is bad, actually
* What makes singular learning theory different?

1. **[^](#fnrefwg1aym99rf)** This sequence was inspired by my worry that I had focused too singularly on singular learning theory. I went on a journey through the broader sea of "learning theory", hopeful that I would find other signs of useful theory. My search came up mostly empty, which is why I decided to write >10,000 words on the subject.
2. **[^](#fnrefdc6r5bvxb07)** Though, granted, string theory keeps on popping up in other branches like condensed matter theory, where it can go on to motivate practical results in material science (and singular learning theory, for that matter).
3. **[^](#fnrefy5hhs1ffq7o)** I haven't gone through all of these sources in equal detail, but the content I cover is representative of what you'll learn in a typical course on deep learning theory.
Demons from the 5&10verse!

The 5 and 10 error is a glitch in logical reasoning that was first characterized in a formal proof by Scott Garrabrant of MIRI. While the original version of the problem specifically concerned AIs based on logical induction, it generalizes out into humans startlingly often once you know how to look for it. However, because the actual error in reasoning is so rudimentary and low-level, it can be both difficult to point out and easy to fall into, making it especially important to characterize. There is also a tremendous amount of harm being created by compounding 5&10 errors within civilization, and escaping this destructive equilibrium is necessary in order for the story of humanity to end anywhere other than summoning the worst demon god it can find and feeding the universe to it.

The error in reasoning goes like this: you're presented with a pair of options, one of which is clearly better than the other. They are presented as equal choices: you could take $5, or you could take $10. This is a false equivalence created entirely by the way you're looking at the scenario, but when that equivalence gets into your reasoning it wreaks havoc on the way you think. One of these options is clearly and unambiguously better than the other; if you have something you care about that runs through this, you will never make this mistake in that area, because it would obviously be a dumb move. But these are being presented as equal options, you could take the $5 instead of the $10, and if you could do that, there must be a valid reason why you would do that. Otherwise you would be stupid and feel bad about it, and that can't be right: you shouldn't be feeling stupid and bad.

This is where the demon summoning comes in. The space of all possible reasons why you would take a given action, for a fully general agent, is infinite: a sprawling fractal of parallel worlds stacked on parallel worlds, out to the limits of what you as an agent can imagine. "What if there was a world whe
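To make the shape of the error concrete, here is a deliberately toy sketch (my own simplification, not Garrabrant's formal construction, which concerns proof search under logical induction). The agent evaluates "if I take X, I get Y" as a material implication relative to a fixed belief about its own action, so every conditional about the action it believes it won't take is vacuously "provable", including the useless ones:

```python
# A toy sketch of the 5-and-10 failure (simplified; the original result
# concerns proof-based agents, not this dictionary lookup). The agent
# scores "if I take a, I get u" as a material implication relative to a
# fixed belief about its own action. Conditionals whose antecedent is
# believed false are vacuously true, so any utility at all, here 0, can
# be "proven" for the action the agent believes it won't take.

REAL_UTILITY = {"take_5": 5, "take_10": 10}

def naive_agent(believed_own_action: str) -> str:
    provable_utility = {}
    for action in REAL_UTILITY:
        if action == believed_own_action:
            # Antecedent believed true: the genuine consequence is provable.
            provable_utility[action] = REAL_UTILITY[action]
        else:
            # Antecedent believed false: "if I take it, I get 0" is
            # vacuously provable, alongside every other possible utility.
            provable_utility[action] = 0
    # Maximizing over "provable" utilities just ratifies the prior belief.
    return max(provable_utility, key=provable_utility.get)

assert naive_agent("take_5") == "take_5"    # the $5 choice is self-confirming
assert naive_agent("take_10") == "take_10"  # and so is the $10 choice
```

Nothing in this loop ever forces the agent to notice that $10 beats $5; the "valid reason" for taking the $5 is manufactured by the evaluation procedure itself, which is exactly the false equivalence described above.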
South Bay Less Wrong Meetup Saturday Feb 26th

There will be a Less Wrong meetup at 6:00 PM this Saturday, the 26th of February, at Tortuga (850 Williams Way, Mountain View, CA), the planned community founded by Patri Friedman. Shannon Friedman will be our gracious host for the evening.

As usual, the meetup will be party-like and full of small group conversations. Feel free to bring food to share, or not. Please RSVP at the meetup.com page if you plan to attend.
Target for Tonight: A Drama In One Act

(Cross posted from my Substack, A Flood of Ideas - https://mflood.substack.com/p/target-for-tonight-a-drama-in-one)

Sources:

* Max Hastings, Bomber Command
* Frederick M. Sallagar, The Road to Total War (available online)
* "Target for Tonight," 1941 RAF propaganda film (available online)

TARGET FOR TONIGHT

A Drama in One Act

(picture generated by VQGAN+CLIP)

By M. Flood

Setting: It is the night of August 25, 1941, just after 8 o'clock. We are in a Royal Air Force Bomber Command airfield somewhere in the southeast of England. This is the barracks room of Second Squadron of D Station, Bomber Group 22. The previous day the Luftwaffe dropped bombs on London for the first time, the first time a civilian area had been deliberately targeted during the conflict.

Characters: The crew of G Bomber, a twin-engine Vickers Wellington. These men have been flying together for a year. They are twenty-nine missions into a thirty-mission tour. If they survive tonight, they will be rotated off active duty to training section for a much-needed rest. Together they have been through lightning strikes, fogged-in landings, anti-aircraft fire, and one memorable near miss with a cow that had wandered onto their runway. They trust and depend on one another totally. The stress is wearing them down, though, and each crew member is manifesting that strain in his own way.

• Captain Derek Lewis (Pilot, 24) - A calm and competent professional on the outside. In the air, he has total confidence in himself and his crew. On the ground, the last twenty-nine missions and countless near-misses have left him almost cripplingly superstitious.

• Sergeant Christopher Sims (Co-Pilot, 23) - A joker, also a great sportsman. When on the ground he drives his motorcycle too fast, has one too many drinks at the pub and at mess dinners, and tries a pass at any pretty woman he meets – anything to have the risks be entirely in his own hands.

• Sergeant Perry Anderton (Navigator/Bombardier, 23) - Outwardly
The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence

Epistemic Status: The following seems plausible to me, but it's complex enough that I might have made some mistakes. Moreover, it goes against the beliefs of many people much smarter than myself. Thus caution is advised, and commentary is appreciated.

I.

In this post, I aim to make a philosophical argument that we (or anyone) cannot use simulation to create new consciousnesses (or, for that matter, to copy existing people's consciousnesses so as to give them simulated pleasure or pain). I here make a distinction between "something that acts like it is conscious" (e.g. what is commonly known as a 'p-zombie') and "something that experiences qualia." Only the latter is relevant to what I mean when I say something is 'conscious' throughout this post. In other words, consciousness here refers to the quality of 'having the lights on inside', and as a result it relates as well to whether or not an entity is a moral patient (i.e. can it feel pain? Can it feel pleasure? If so, it is important that we treat it right).

If my argument holds, then this would be a so-called 'crucial consideration' for those who are concerned about simulation. It would mean that no one can make the threat of hurting us in some simulation, nor can one promise to reward us in such a virtual space. However, we ourselves might still exist in some higher world's simulation (in a manner similar to what is described in SlateStarCodex's 'The View from the Ground Level').

Finally, since one consequence of my conclusion is that there is no moral downside to simulating beings that suffer, one might prefer to level a Pascal's Wager-like argument against me and say that under conditions of empirical and moral uncertainty, the moral consequences of accepting this argument (i.e. treating simulated minds as not capable of suffering) would be extreme, whereas granting simulated minds too much respect has fewer downsides.

Without further ado...

II.

Let us first distinguish two possible worlds. In the first,
Paul Christiano on Dwarkesh Podcast

Dwarkesh's summary:

> Paul Christiano is the world's leading AI safety researcher. My full episode with him is out!
>
> We discuss:
>
> * Does he regret inventing RLHF, and is alignment necessarily dual-use?
> * Why he has relatively modest timelines (40% by 2040, 15% by 2030),
> * What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?
> * Why he's leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon,
> * His current research into a new proof system, and how this could solve alignment by explaining models' behavior,
> * and much more.
UNGA General Debate speeches on AI

At the UN General Assembly's 2023 General Debate on September 19-26, many leaders spoke about artificial intelligence, and some specifically about existential risk and what to do about it. Attitudes ranged from advocating serious global action (Israel, the UK) to being completely dismissive (Malta). The following is a transcript of the parts of those speeches dealing with AI, sorted chronologically. Some particularly relevant parts bolded by me.

These speeches can be useful for getting a rough sense of various nations' stances on the issue, and indicate surprisingly strong support from the world's governments for taking AI risk seriously.

**Day 1**
=========

**United Nations**
------------------

*Delivered by UN Secretary-General António Guterres ([link](https://www.youtube.com/live/g8f8K1s-c2A?si=tEJwRHnCLyecs0NR&t=1740))*

We must also face up to the looming threats posed to human rights by new technologies. **Generative artificial intelligence holds much promise – but it may also lead us across a Rubicon and into more danger than we can control.**

When I mentioned Artificial Intelligence in my General Assembly speech in 2017, only two other leaders even uttered the term. Now AI is on everyone's lips – a subject of both awe, and fear. **Even some of those who developed generative AI are calling for greater regulation.**

But many of the dangers of digital technology are not looming. They are here. The digital divide is inflaming inequalities. Hate speech, disinformation and conspiracy theories on social media platforms are spread and amplified by AI, undermining democracy and fueling violence and conflict in real life. Online surveillance and data harvesting are enabling human rights abuses on a mass scale. And technology companies and governments are far from finding solutions.

Excellencies,

We must move fast and mend things. New technologies require new and innovative forms of governance – with input from experts building this technology and from those monitoring its abuses. And we urgently need a Global Digital Compact -- between governments, regional organizations, the private sector and civil society -- to mitigate the risks of digital technologies, and identify ways to harness their benefits for the good of humanity.

Some have called for consideration of a new global entity on AI that could provide a source of information and expertise for Member States. There are many different models -- inspired by **such examples as the International Atomic Energy Agency, the International Civil Aviation Organization or the Intergovernmental Panel on Climate Change**. The UN stands ready to host the global and inclusive discussions that are needed, depending on the decisions of Member States.

To help advance the search for concrete governance solutions, **I will appoint this month a High-Level Advisory Body on Artificial Intelligence – which will provide recommendations by the end of this year**. Next year's Summit of the Future is a once-in-a-generation opportunity for progress to deal with these new threats, in line with the vision of the UN Charter. Member States will decide how to move forward on the New Agenda for Peace, the Global Digital Compact, reforms to the international financial architecture, and many other proposals to address challenges and bring greater justice and equity to global governance.
**United States**
-----------------

*Delivered by President Joe Biden ([link](https://www.youtube.com/live/g8f8K1s-c2A?si=hUBbnDLFsk9NWrT5&t=5160))*

Emerging technologies, such as artificial intelligence, hold both enormous potential and enormous peril. We need to be sure they are used as tools of opportunity, not as weapons of oppression. Together with leaders around the world, the United States is working to **strengthen rules and policies so AI technologies are safe before they are released to the public; to make sure we govern this technology — not the other way around, having it govern us**. And I'm committed to working through this institution and other international bodies and directly with leaders around the world, including our competitors, to ensure we harness the power of artificial intelligence for good, while protecting our citizens from its most profound risk. It's going to take all of us. I've been working at this for a while, as many of you have. It's going to take all of us to get this right.

**Slovenia**
------------

*Delivered by President Nataša Pirc Musar ([link](https://www.youtube.com/live/g8f8K1s-c2A?si=6lepQd8cn4jiMzxR&t=22564))*

Inventions are meant to advance humanity. Social media were not invented to disconnect us, but too often they do exactly that. Artificial intelligence can be useful, but it can also be dangerous. In this regard I applaud the Secretary General's resolve to form a high-level advisory body on artificial intelligence. We need to find a way to govern the development of new technologies, including artificial intelligence, in a way that does not impede economic, developmental, social and research opportunities, while not putting us at risk. A human-centric and human rights-based approach to the full life cycle of technologies, comprising their design, development and application, as well as their decline, should be the answer. The Global Digital Compact must be centered on this notion. Things can be done, but all actors, including private companies, will need to be on board with honest and meaningful commitment. Ensuring that human rights are the foundation of an open, safe digital future is not going to be an easy task. In saying so I look at the key menaces of our time: disinformation. Unfortunately our time is once again a time of competing narratives, only now they are much more complex as regards the threat they pose to humanity.

**Day 2**
=========

**South Korea**
---------------

*Delivered by President Yoon Suk Yeol ([link](https://www.youtube.com/live/exwkakhx8wY?si=4_4ec43cS5VASiyM&t=17309))*

To support the creation of an international organization under the UN and provide concrete directions for the development of AI governance, **the Korean government plans to host the 'Global AI Forum'**. Furthermore, we plan to collaborate closely with the 'High-Level Advisory Body on AI' being established by the UN to provide a network for communication and collaboration among global experts.

**Monaco**
----------

*Delivered by Prince Albert II ([link](https://www.youtube.com/live/exwkakhx8wY?si=giEWFXE5IxdS_4H5&t=27625))*

**Amongst the existential threats that we are facing, there is one which has become very widespread recently, and I am referring, of course, to artificial intelligence**, which holds at the same time both an immense potential for the SDGs and unprecedented risks for peace and security.
Looking for innovation is part of our very nature and needs to be supported when it is geared towards improving the lives of our people. We are of course fascinated by the advantages provided by digital tools. They simplify our daily life. But we cannot ignore the potential dangers they represent. Artificial intelligence is very frequently more effective than man at many tasks, but just like the tongue in Aesop's fable, **it could be the best and the worst thing for humankind**. Cyber attacks using artificial intelligence are already targeting critical infrastructure in hospitals and humanitarian operations conducted by the UN. Security and weapons industries use these techniques, and their potential leads us to an ethical problem: can a machine decide on the human issue of life and death? These questions show that we all have a duty to build a framework of global governance and ethical norms dealing with artificial intelligence. Therefore, concluding the current work underway on this is of particular importance. **I welcome, therefore, work aimed at creating a high-level advisory body on this, within the UN, to work on international governance over artificial intelligence.**

**Chile**
---------

*Delivered by President Gabriel Boric Font ([link](https://www.youtube.com/live/exwkakhx8wY?si=MhTWLFy0oHOqSyL_&t=28933))*

A few hours ago, in fact, a mother had to report to the police how a group of boys at school had taken images of her daughter and, using artificial intelligence, created pornographic images of her and distributed them. The world is changing. People have a right to privacy, to their integrity. All changes in technology throughout the history of humanity have been a major opportunity to create fairer societies, but if we get it wrong, they can also be sources of new injustice, and in this context it is the obligation of all to build multilateral consensus around an ethical framework for the development and use of new technologies such as artificial intelligence. We need a framework which takes the human rights perspective into account in the research and development of new technologies, in order to protect and foster the dignity and the rights of peoples and individuals. Societies must make progress, there can be no doubt, but we have to do it in a responsible way. That is why understanding and better managing the opportunities of all the new technologies, to put them to use for our benefit before they become a threat, is so important. And I would conclude by saying that this is an obligation, and it is what we've been doing in Chile since the Congress on the Future, and I am absolutely sure you all have initiatives of a similar nature in your countries. With humility, but with pride, I can state here today that my country can stand as a Latin American reference for the future of artificial intelligence, and we are going to work tirelessly in that responsible direction. Democracy, as I said, is memory, but it is also future, and that's why we think that technological development has to be a tool for unity, not division. It has to promote the empowerment of all of society.

**Italy**
---------

*Delivered by Prime Minister Giorgia Meloni ([link](https://www.youtube.com/live/exwkakhx8wY?si=Z65WgPlBNIJ-hNOz&t=38150))*

Even what would seem at a superficial glance a tool that could improve the well-being of humanity can, at a closer look, turn out to be a risk.
Just think of artificial intelligence: the applications of this new technology may offer great opportunities in many fields, but **we cannot pretend to not understand its enormous inherent risks**. I'm not sure if we are adequately aware of the implications of technological development whose pace is much faster than our capacity to manage its effects. We were used to progress that aimed to optimize human capacities, while today we are dealing with progress that risks replacing human capacities. Because if in the past this replacement focused on physical tasks, so that humans could dedicate themselves to intellectual and organizational work, today the human intellect risks being replaced, with consequences that could be devastating, particularly for the job market. More and more people will no longer be necessary, in a world ever more dominated by disparities, by the concentration of power and wealth in the hands of the few. This is not the world we want. And so I think we should not mistake this domain for a free zone without rules. We need global governance mechanisms that ensure that these technologies respect ethical boundaries, that technological evolution is put to the service of humanity and not vice versa. We must guarantee the practical application of the concept of "algorethics"; that is, ethics for algorithms. These are some of the major themes that Italy plans to put at the center of the G7 in 2024.

**Spain**
---------

*Delivered by Prime Minister Pedro Sánchez Pérez-Castejón ([link](https://www.youtube.com/live/exwkakhx8wY?si=ICOmZolKAarPbFEQ&t=39582))*

Europe also has the responsibility of ensuring a more person-centered and rights-oriented digital transformation. We need to lay the groundwork for the regulation of artificial intelligence, **a field whose vast possibilities do not outweigh the barely concealed risks**. In this regard, Spain is committed to supporting the Secretary General's envoy on technology, providing resources and know-how in the development of multilateral governance in this area. **We would like to host the headquarters of the future international artificial intelligence agency.**

**Day 3**
=========

**China**
---------

*Delivered by Vice President Han Zheng ([link](https://www.youtube.com/live/eFtHe2CTTUQ?si=ztZIHlEiah4PBTFs&t=26497))*

China supports the UN, with full respect for the governance principles and practices of all countries, in serving as the main channel in creating a widely accepted AI governance framework, standards and norms.

**Nepal**
---------

*Delivered by Prime Minister Pushpa Kamal Dahal ([link](https://www.youtube.com/live/eFtHe2CTTUQ?si=V5w6HqZVH7dmAVMx&t=35443))*

**The void of international governance on cyberspace and artificial intelligence demands multilateral regulation.** Dual use [?] of artificial intelligence urgently calls for informed deliberation on preventing its potential misuse, and for strengthening international cooperation.

**Day 4**
=========

**Israel**
----------

*Delivered by Prime Minister Benjamin Netanyahu ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=Brd84GLO38uOSR1t&t=2207))*

Ladies and gentlemen, whether our future will prove to be a blessing or a curse will also depend on how we address perhaps **the most consequential development of our time: the rise of artificial intelligence**. The AI revolution is progressing at lightning speed. It took centuries for humanity to adapt to the agricultural revolution. It took decades to adapt to the industrial revolution.
We may have but a few years to adapt to the AI revolution. The perils are great, and they are before us: the disruption of democracy, the manipulation of minds, the decimation of jobs, the proliferation of crime, and the hacking of all the systems that facilitate modern life. Yet even more disturbing is the potential eruption of AI-driven wars that could achieve an unimaginable scale. And behind this perhaps looms **an even greater threat, once the stuff of science fiction: that self-taught machines could eventually control humans, instead of the other way around. The world's leading nations, however competitive, must address these dangers. We must do so quickly. And we must do so together. We must ensure that the promise of an AI utopia does not turn into an AI dystopia.**

We have so much to gain. Imagine the blessings of finally cracking the genetic code, extending human life by decades, and dramatically reducing the ravages of old age. Imagine health care tailored to each individual's genetic composition, and predictive medicine that prevents diseases long before they occur. Imagine robots helping to care for the elderly. Imagine the end of traffic jams, with self-driving vehicles on the ground, below the ground and in the air. Imagine personalized education that cultivates each person's full potential throughout their lifetime. Imagine a world with boundless clean energy, and natural resources for all nations. Imagine precision agriculture and automated factories that yield food and goods in an abundance that ends hunger and want. I know this sounds like a John Lennon song. But it could all happen. Imagine, imagine that we could achieve the end of scarcity. Something that eluded humanity for all history. It's all within our reach.

And here's something else within our reach. With AI, we can explore the heavens as never before and extend humanity beyond our blue planet. For good or bad, the development of AI will be spearheaded by a handful of nations, and my country Israel is already among them. Just as Israel's technological revolution provided the world with breathtaking innovations, I'm confident that AI developed by Israel will once again help all humanity.

I call upon world leaders to come together to shape the great changes before us. But to do so in a responsible and ethical way. Our goal must be to ensure that AI brings more freedom and not less, prevents wars instead of starting them, and ensures that people live longer, healthier, more productive and peaceful lives. It's within our reach. And as we harness the powers of AI, let us always remember the irreplaceable value of human intuition and wisdom. Let us cherish and preserve the human capacity for empathy, which no machine can replace.

Thousands of years ago, Moses presented the children of Israel with a timeless and universal choice: "Behold, I set before you this day a blessing and a curse." May we choose wisely between the curse and the blessing that stand before us this day. Let us harness our resolve and our courage to stop the curse of a nuclear Iran and roll back its fanaticism and aggression. Let us bring forth the blessings of a new Middle East that will transform lands once ridden with conflict and chaos into fields of prosperity and peace. And may we avoid the perils of AI by combining the forces of human and machine intelligence, to usher in a brilliant future for our world in our time, and for all time. Thank you.
**Malta**
---------

*Delivered by Prime Minister Robert Abela ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=9dOF0ilNPFRXB9Yd&t=6439))*

Right now, one of the biggest fears many have is of technology. We have lived through two decades of unprecedented change, from the basic mobile phone and text messaging to smartphones and face recognition. It can often seem like society has lost control, that the technology itself is in charge, and now, with the advent of generative artificial intelligence, the risk is that that seems truer than ever. **How we feel about our future is unfortunately not helped by deluded media headlines about the machines taking over**, spreading fear about AI taking jobs and rendering human effort obsolete. Let's be clear: AI will have a huge impact on all aspects of society, but let us also be clear that if we as leaders take the right decisions, that impact can be a positive one on our societies. Again, as with trade, **the answer isn't to try and turn the clock back**, to close our eyes to the inevitable, and hope that it will go away. Indeed, the answer is to get the future right, to take the decisions now, so that we can harness the power of AI for the public and the coming good, **not fear it as a coming catastrophe**. In Malta we are already doing just that. We are already seeing and experiencing how AI can enhance public service, improving lives for all our citizens. We currently have six pilot projects covering areas from healthcare to traffic management, taking ownership through leadership, not trying to ignore the future. Though yes, naturally, there is a limit to what any one country, especially a small one, can do. To make AI a global good we need global action. Malta stands resolutely behind efforts to increase and enhance international cooperation on AI. Technology is changing too fast, its potential so vast, that failing to work together isn't really an option anymore.

**Barbados**
------------

*Delivered by Prime Minister Mia Amor Mottley ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=dBRRqzKAmOOZVsrZ&t=8997))*

Our democracy cannot survive if we do not have the same facts, but yet we live in a world where the generation of fake news is almost a daily occurrence and where people act on those premises without consideration for whether the news is true or not. The role that generative artificial intelligence will play in our world must be for good purposes and not evil, but if we are to ensure that is the case, then an appropriate framework for regulatory action must be put in place. We therefore support the actions of the Secretary General, recognizing that the question will come one day from some as to whether you sought to preserve our democracy or whether you allowed it to crumble, and whether you have failed us as individual citizens of the world. We ask that question recognizing that AI is not the immediate focal point for many, because the drama and the crises that surround climate are taking up all of the oxygen, literally, in the world.

**United Kingdom**
------------------

*Delivered by Deputy Prime Minister Oliver Dowden ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=MEy-7pzRH4ovRlGX&t=34351))*

But I want to focus on another challenge. A challenge that is already with us today, and which is changing - right now - all of our tomorrows. It is going to change everything we do – education, business, healthcare, defence - the way we live. And it is going to change government – and relations between nations – fundamentally.
It is going to change this United Nations, fundamentally. **Artificial Intelligence – the biggest transformation the world has known.** Our task as governments is to understand it, grasp it, and seek to govern it. And we must do so at speed. Think how much has changed in a few short months. And then think how different this world will look in five years' or ten years' time. We are fast becoming familiar with the AI of today, but we need to prepare for the AI of tomorrow.

At this frontier, we need to accept that we simply do not know the bounds of possibilities. We are as Edison before the light came on, or as Tim Berners-Lee before the first email was sent. They could not - surely - have respectively envisaged the illumination of the New York skyline at night, or the wonders of the modern internet. But they suspected the transformative power of their inventions. Frontier AI, with the capacity to process the entirety of human knowledge in seconds, has the potential not just to transform our lives, but to reimagine our understanding of science. If - like me - you believe that humans are on the path to decoding the mysteries of the smallest particles, or the farthest reaches of our universe, if you think that the Millennium Prize Problems are ultimately solvable, or that we will eventually fully understand viruses, then you will surely agree that by adding to the sum total of our intelligence at potentially dizzying scales, frontier AI will unlock at least some of those answers on an expedited timetable in our lifetimes. Because in AI time, years are days, even hours. The "frontier" is not as far as we might assume.

Now, that brings with it great opportunities. The AI models being developed today could deliver the energy efficiency needed to beat climate change, stimulate the crop yields required to feed the world, detect signs of chronic diseases or pandemics, better manage supply chains so everyone has access to the materials and goods they need, and enhance productivity in both business and governments. In fact, every single challenge discussed at this year's General Assembly – and more – could be improved or even solved by AI. Perhaps the most exciting thing is that AI can be a democratising tool, open to everyone. Just as we have seen digital adoption sweep across the developing world, AI has the potential to empower millions of people in every part of our planet, giving everyone, wherever they are, the ability to be part of this revolution. AI can and should be a tool for all.

Yet any technology that can be used by all can also be used for ill. We have already seen the dangers AI can pose: teens hacking individuals' bank details; terrorists targeting government systems; cyber criminals duping voters with deep-fakes and bots; even states suppressing their peoples. But **our focus on the risks has to include the potential of agentic frontier AI, which at once surpasses our collective intelligence, and defies our understanding.** Indeed, many argue that this technology is like no other, in the sense that its creators themselves don't even know how it works. They can't explain why it does what it does; they cannot predict what it will - or will not - do. **The principal risks of frontier AI will therefore come from misuse, misadventure, or misalignment with human objectives.** Our efforts need to preempt all of these possibilities - and to come together to agree a shared understanding of those risks.
This is what the **AI Safety Summit that the United Kingdom is hosting in November** will seek to achieve. Despite the entreaties we saw from some experts earlier in the year, **I do not believe we can hold back the tide**. There is no future in which this technology does not develop at an extraordinary pace. And although I applaud leading companies' efforts to put safety at the heart of their development, and their voluntary commitments that provide guardrails against unsafe deployment, the starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible. Indeed, **the stated aim of these companies is to build superintelligence: AI that strives to surpass human intelligence in every possible way.** Now, **some of the people working on this think it is just a few years away**.

The question for governments is how we respond to that. The speed and scale demand that leaders be clear-eyed about the implications and potential. We cannot afford to become trapped in debates about whether AI is a tool for good or a tool for ill; it will be a tool for both. We must prepare for both and insure against the latter. The international community must devote its response equally to the opportunities and the risks - and do so with both vigour and enthusiasm.

In the past, leaders have responded to scientific and technological developments with retrospective regulation. But in this instance the necessary guardrails, regulation and governance must be developed in a parallel process with the technological progress. Yet, at the moment, global regulation is falling behind current advances. Lawmakers must draw in everyone - developers, experts, academics - to understand in advance the sort of opportunities and risks that might be presented. We must be frontier governments alongside the frontier innovators, and the United Kingdom is determined to be in the vanguard, working with like-minded allies in the United Nations and through the Hiroshima G7 process, the Global Partnership on AI, and the OECD.

Ours is a country which is uniquely placed. We have the frontier technology companies. We have world-leading universities. And we have some of the highest investment in generative AI. And, of course, we have the heritage of the Industrial Revolution and the computing revolution. This hinterland gives us the grounding to make AI a success, and make it safe. They are two sides of the same coin, and **our Prime Minister has put AI safety at the forefront of his ambitions**.

Now, we recognise that while, of course, every nation will want to protect its own interests and strategic advantage, the most important actions we will take will be international. In fact, because tech companies and non-state actors often have country-sized influence and prominence in AI, this challenge requires a new form of multilateralism. Because it is only by working together that we will make AI safe for everyone.

Our first ever AI Safety Summit in November will kick-start this process with a focus on frontier technology. In particular, we want to look at the most serious possible risks, such as the potential to undermine biosecurity, or to increase the ability of people to carry out cyber attacks, as well as **the danger of losing control of the machines themselves**.
For those that would say that these warnings are sensationalist, or belong in the realm of science fiction, I simply point to the words of hundreds of AI developers, experts and academics, who have said - and I quote: "**Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.**" Now, I do not stand here claiming to be an expert on AI, but I do believe that policy-makers and governments ignore this expert consensus at the peril of all of our citizens.

**Our Summit will aim to reach a common understanding of these most extreme risks, and how the world should confront them.** And at the same time, focus on how safe AI can be used for public good. The speed of this progress demands that this is not a one-off, or even an annual gathering. New breakthroughs are happening daily, and we need to convene more regularly. Moreover, it is essential that we bring governments together with the best academics and researchers to be able to evaluate the technologies. Tech companies must not mark their own homework, just as governments and citizens must have confidence that risks are properly mitigated. Indeed, a large part of this work should be about ensuring faith in the system, and it is only nation states that can provide the reassurance that the most significant national security concerns have been allayed.

Now that is why I am so proud that the United Kingdom's **world-leading Frontier AI Taskforce has brought together pioneering experts like Yoshua Bengio and Paul Christiano**, with the head of GCHQ and our National Security Advisers. It is the first body of its kind in the world that is developing the capacity to conduct the safe external red-teaming that will be critical to building confidence in frontier models. And our ambition is for the Taskforce to evolve to become a permanent institutional structure, with an international offer.

Building this capacity in liberal, democratic countries is important. Many world-beating technologies were developed in nations where expression flows openly and ideas are exchanged freely. A culture of rules and transparency is essential to creativity and innovation, and it is just as essential to making AI safe.

So that, ladies and gentlemen, is the task that confronts us. It is - in its speed, and its scale, and its potential - unlike anything we - or our predecessors - have known before. Exciting. Daunting. Inexorable. So we must work – alongside its pioneers – to understand it, to govern it, to harness its potential, and to contain its risks. We will have to be pioneers too. We may not know where the risks lie, how we might contain them, or even the fora in which we must determine them. What we do know, however, is that the most powerful action will come when nations work together. The **AI revolution will be a bracing test for the multilateral system, to show that it can work together on a question that will help to define the fate of humanity.** Our future - humanity's future - our entire planet's future - depends on our ability to do so. That is our challenge, and this is our opportunity. To be – truly – the United Nations.
**Singapore**
-------------

*Delivered by Minister for Foreign Affairs Vivian Balakrishnan ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=ngMwAyuUOAObGf6p&t=35768))*

The third point I want to make is that in the midst of the Digital Revolution, and especially the advent of artificial intelligence, we must prepare for the risks and also distribute the benefits of these technologies more fairly. In the past year, generative AI, ChatGPT, has captured the popular imagination, but actually we are already on the verge of the next stage: AI agents with the ability to negotiate and transact with each other and with humans, and often you won't be able to tell the difference. This has profound implications for all our societies, our politics, and our economies everywhere. And autonomous weapon systems without human fingers on the trigger are already with us. Witness the wars around us. And as Secretary General António Guterres said at the opening of the General Assembly this week, quote, "generative AI holds much promise, but it may also lead us across a rubicon and into more danger than we can control", unquote, and this is especially so in the theater of war and peace.

AI will fundamentally disrupt our assumptions about military doctrine and strategic deterrence. For example, the speed at which AI-enabled weapon systems can be almost instantaneously deployed and triggered will dramatically reduce decision times for our leaders, and there will be many occasions when humans may not even be in the firing loop, but we will be on the firing line. This would inevitably heighten the risk of unintended conflicts or the escalation of conflicts. During the Cold War, the sense of Mutually Assured Destruction imposed mutual restraint, although we now know there were in fact several close shaves. This specter of nuclear escalation has not disappeared. And yet the advent of artificial intelligence in conflict situations has actually increased the risk exponentially.

So we must start an inclusive global dialogue, and we must start it at the United Nations. We need to urgently consider the oversight of such systems and the necessary precautions to avoid miscalculations. And in fact, this is just one facet of many as we focus our minds on how to harness the potential and manage the risks of AI. Singapore welcomes the Secretary General's decision to convene a high-level advisory body on AI to explore these critical issues. Singapore is optimistic that the UN and the multilateral system will be up to the task of establishing norms on these fast-emerging critical technologies. The UN open-ended working group on ICT security, which happens to be chaired by Singapore, has made steady progress, and this offers some useful lessons for other areas, including AI. Singapore commits to continue to support all efforts to promote international cooperation and to strengthen global rules, norms and principles in the digital domain. We also look forward to the adoption of the Global Digital Compact at the Summit of the Future in 2024.

**Sweden**
----------

*Delivered by Minister for Foreign Affairs Tobias Billström ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=5nY85A97hCnvWjiv&t=39636))*

Emerging technologies, including artificial intelligence, are transforming our world. They offer unprecedented possibilities, including to accelerate our efforts on climate change, global health, and the Sustainable Development Goals.
This fast-moving development also entails challenges for international security and human rights. Shaping a shared vision of new technologies based on the values of the UN Charter will be key to harnessing their potential and mitigating the risks. Together with Rwanda, Sweden is co-facilitating the process in the General Assembly of developing a Global Digital Compact that will outline shared principles for an open, free and secure digital future for all.

**Costa Rica**
--------------

*Delivered by Minister for Foreign Affairs Arnoldo Ricardo André Tinoco ([link](https://www.youtube.com/live/Ey1ylQscy4g?si=2iRlfHzSQbdiYV9b&t=40933))*

We urgently need new governance frameworks for cyber crime and artificial intelligence, as well as cyber security. The militarization of new technologies raises specific problems and challenges. Therefore, as per the content of the Belén Communiqué, and along with Austria and Mexico, we will be presenting a resolution to the General Assembly on the matter of autonomous weapons systems.

**Day 5**
=========

**Somalia**
-----------

*Delivered by Prime Minister Hamza Abdi Barre ([link](https://www.youtube.com/live/IjiFelSC3sY?si=pcbKy6ShyIzkhbXg&t=5272))*

We see that new technologies such as artificial intelligence pose new and terrifying threats.

**Ethiopia**
------------

*Delivered by Deputy Prime Minister Demeke Mekonnen Hassen ([link](https://www.youtube.com/live/IjiFelSC3sY?si=OF1rRBmqB7l6EYzE&t=7042))*

We should also ensure that new technologies such as artificial intelligence are used responsibly, in a manner that benefits humanity.

**Indonesia**
-------------

*Delivered by Minister for Foreign Affairs Retno Lestari Priansari Marsudi ([link](https://www.youtube.com/live/IjiFelSC3sY?si=K7Jc0FW8KapcpbYg&t=12976))*

Access to safe and secure digital technology for developing countries, including AI, is crucial for future sustainable growth.

**Iceland**
-----------

*Delivered by Minister for Foreign Affairs Thórdís Kolbrún Reykfjörd Gylfadóttir ([link](https://www.youtube.com/live/IjiFelSC3sY?si=9TnCUjTHkYRKxxX4&t=18709))*

Artificial intelligence raises some serious questions that will demand close multilateral cooperation to avoid the very real risk of this technology becoming a tool of destruction rather than creation. And we must also bear in mind that the promise of human rights and freedom applies to individuals, but does not necessarily extend to state-sponsored propaganda or artificially generated misinformation that is intended to sow discord and disunity. Freedom of expression is for human beings, not for programmed bots that spread hate, lies and fear, because human rights are for human beings.

**Venezuela**
-------------

*Delivered by Minister for Foreign Affairs Yvan Gil Pinto ([link](https://www.youtube.com/live/IjiFelSC3sY?si=dR2geZvhYaNFrx_q&t=31864))*

The protection of cyberspace, the fight against cybercrime, and the regulation of new information technologies, social media and artificial intelligence must become a strategic priority for the United Nations. If we truly advocate for the defense of human rights and democratic principles, we must promote fair and equitable regulation that prevents the concentration of these new tools in the hands of a few, driven by their interests and control. We cannot accept the use of these new technologies to destabilize legitimate governments and destroy social harmony and peace.
**Day 6**
=========

**Holy See**
------------

*Delivered by Secretary for Relations with States Archbishop Paul Gallagher ([link](https://www.youtube.com/live/R2eqf2jTDC0?si=MSlKArqzzjC_X2mk&t=6417))*

Another important challenge at hand could be defined, more generally, as the expanding digital galaxy we inhabit, and specifically artificial intelligence. There is an urgent need to engage in serious ethical reflection on the use and integration of supercomputer systems and processes in our daily lives. We must be vigilant and work to ensure that the discriminatory use of these instruments does not take root at the expense of the most fragile and excluded. It is not acceptable that the decision about someone's life and future be entrusted to an algorithm. This is valid in all situations, including the development and use of Lethal Autonomous Weapons Systems. Recently, a growing number of legal and ethical concerns have been raised about the use of LAWS in armed conflicts. It is clear that their use should be in line with international humanitarian law. The Holy See supports the establishment of an International Organization for Artificial Intelligence, aimed at facilitating the fullest possible exchange of scientific and technological information for peaceful uses and for the promotion of the common good and integral human development.

**Nicaragua**
-------------

*Delivered by Minister for Foreign Affairs Denis Ronaldo Moncada Colindres ([link](https://www.youtube.com/live/R2eqf2jTDC0?si=nCrYA0m68NsCwUSK&t=10398))*

In the face of all the advances in science, technologies and techniques, including those pretentiously called "artificial intelligence", as if they were not themselves the fruit of human intelligence, Nicaragua demands the full participation of the peoples who have made possible, with our blood and resources, these advances of the conquerors. We demand what is ours: the right to fully live this development, improving the conditions of labor, study and life, by right of our peoples. At the same time as we propose inclusion, Nicaragua also proposes the rational, beneficial use of these resources of humanity, which in the hands of the malevolent constitute weapons of mass destruction against countries, peoples and communities.

**Canada**
----------

*Delivered by Ambassador to the United Nations Bob Rae ([link](https://www.youtube.com/live/R2eqf2jTDC0?si=GMCxAoG-fSO9ut67&t=13928))*

[Canadians] are also concerned about artificial intelligence, foreign interference, misinformation, and disinformation.